Demonstrator Slice Possibilities and Timetable



  1. Demonstrator Slice Possibilities and Timetable
     • Motivation
     • Implications
     • Staging
     • Schedule
     • Conclusions
     Ian Brawn

  2. Why?
     • What do we seek to accomplish by building a demonstrator system?
     • Prove viability of concept and technology
     • Verify inter-operability of modules
     • Test functionality of system as a whole
       • Explore the phase space of module interactions
       • Uncover any limits not obvious in isolation
     • Catch any bugs before we build the final system
     • Provide a development platform for firmware and software
     • Gain experience with the system
       • Control
       • Monitoring
       • Readout
     • Gain experience interfacing with external hardware
       • Uncover any deficiencies in their hardware or ours
       • Build working relationship with external colleagues
     Ian Brawn

  3. Implications for the Demonstrator
     • Demonstrator should…
       • Have the full functionality of the final system
         • A complete slice through the processing chain
       • Be capable of interfacing to every external system required by the final system, or to prototypes/demonstrators of such
       • Process a sufficiently wide area to allow adequate tests of the algorithms
         • E.g., data sharing between modules for the e/g algorithm; de-clustering (see the sketch after this slide)
         • What does "sufficiently wide" mean for global, topological algorithms?
       • Contain at least 1 prototype for every type of module in the system
         • For the more technically challenging/unprecedented modules, these should be preceded by demonstrators
     • Demonstrator: to be useful in the demonstrator system, a demonstrator module is a scaled-down version of the proposed module; same functionality, fewer instances
     • Prototype: no anticipated design differences from the proposed module
     Ian Brawn
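To make the data-sharing bullet concrete, here is a minimal sketch of sliding-window de-clustering on a toy tower grid. The grid size, 2×2 window, threshold, and tie-breaking rule are illustrative assumptions, not parameters of the proposed FEX modules; the point is that deciding whether a window is a local maximum requires the overlapping windows, and hence towers owned by a neighbouring module.

```python
import numpy as np

def decluster(towers, window=2, threshold=10.0):
    """Toy de-clustering: keep a window only if its energy sum is the
    unique maximum among all overlapping window positions. It is this
    overlap that forces edge-tower sharing between adjacent modules."""
    ny, nx = towers.shape
    sums = np.zeros((ny - window + 1, nx - window + 1))
    for y in range(sums.shape[0]):
        for x in range(sums.shape[1]):
            sums[y, x] = towers[y:y + window, x:x + window].sum()

    clusters = []
    for y in range(sums.shape[0]):
        for x in range(sums.shape[1]):
            s = sums[y, x]
            if s < threshold:
                continue
            hood = sums[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
            # Unique local maximum; real firmware would instead add a
            # deterministic tie-breaker (e.g. favour the lower-left window).
            if s == hood.max() and (hood == s).sum() == 1:
                clusters.append((y, x, float(s)))
    return clusters

grid = np.zeros((8, 8))
grid[2, 3], grid[3, 3], grid[3, 4] = 1.0, 6.0, 7.0  # one asymmetric deposit
print(decluster(grid))  # [(2, 3, 14.0)] -- found once, not once per window
```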

  4. The Final Demonstrator
     [Block diagram: the Cal ROD sends coarse towers to L0 Calo FEX A and B, and fine-granularity minitowers to L1 Calo FEX; these, together with the Track Trigger and the L0/L1 Muon systems, feed L0 Topo (global) → L0 CTP and L1 Topo (global) → L1 CTP, with the L0A/L1A accept paths connecting the levels.]
     • Above picture presupposes an architecture on which we haven't settled
       • Calorimeter feature extraction (e.g., jet, e/g) handled by 2 types of module, A/B
     • This is what we work towards, not what we start with
     Ian Brawn

  5. The Final Demonstrator
     [Same diagram and bullets as slide 4, with RODs now attached to each FEX module.]
     Ian Brawn

  6. The Final Demonstrator
     [Same diagram and bullets as slide 4, now also adding the GBT links and a TCM.]
     Ian Brawn

  7. Staging
     • Can't produce everything at once → staging required
     • Factors determining the schedule:
       • Allocation of modules to institutes determines what we can build in parallel
         • Not attempted to address that here
         • Obviously, demonstrators/prototypes should be built by the institute building the final item
       • Some modules are of more use in isolation than others
       • Availability of the external hardware to which we want to interface
     • Mitigation strategies for scheduling problems (see the sketch after this slide):
       • Standardized optical links allow modules to be connected in alternative configurations
         • E.g., bypass the L0 CTP, or use L0 Topo as a test source for L0 Calo FEX A
       • Implement DAQ buffering, readout, and GBT in an FPGA on each board, allowing the ROD to be bypassed
       • All of these options require custom firmware
         • Take care not to generate large firmware overheads not necessary for the final system
     Ian Brawn
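The link-reconfiguration mitigations above can be dry-run in software before any firmware exists. The sketch below models module connectivity as a toy directed graph and checks which data paths a given stage can exercise; the link map is an assumption pieced together from these slides, not the actual optical-link topology.

```python
# Illustrative link map only -- not the real optical-link topology.
LINKS = {
    "Cal ROD":       ["L0 Calo FEX A"],
    "L0 Calo FEX A": ["L0 Topo"],
    "L0 Topo":       ["L0 CTP", "L0 Calo FEX A"],  # Topo doubles as test source
    "ROD":           ["DAQ"],                       # Stage 2 readout path
}

def reachable(start, available, links=LINKS):
    """All modules reachable from `start` using only the available ones."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen or node not in available:
            continue
        seen.add(node)
        stack.extend(links.get(node, []))
    return seen

# Stage 1 has no L0 CTP: confirm the FEX/Topo path can still be exercised.
stage1 = {"Cal ROD", "L0 Calo FEX A", "L0 Topo"}
print(reachable("Cal ROD", stage1))
# {'Cal ROD', 'L0 Calo FEX A', 'L0 Topo'} -- L0 CTP correctly unreachable
```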

  8. External Systems
     • Muons
       • RPCs (amongst other things) being upgraded
       • MUCTPI being upgraded (under study) → source for e/g veto?
     • Calorimeter
       • Digitized front ends; trigger data to arrive via ROD
       • Hybrid analogue/digital RODs available in ATLAS Q3+Q4 2013
       • Available for the full calorimeter 2015–2017
     • Track trigger
       • FTK (Fast Tracker): real-time track processor
       • Receives events at L1A rate from RODs
       • Output = 300 tracks/event at 3 × 10^34 cm^-2 s^-1 (rough bandwidth estimate after this slide)
       • 1st prototypes arriving in 2012; barrel-only system 2014; full system 2016
       • Use FTK as a prototype track trigger?
     • CTP
     • GBTx
       • (Rad-hard) ASIC submissions 2011
       • Firmware implementation available now
     Ian Brawn
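A rough feel for what "receives events at L1A rate" implies for an FTK interface: only the 300 tracks/event figure comes from the slide; the L1A rate and the bytes per track below are assumptions made purely for the arithmetic.

```python
TRACKS_PER_EVENT = 300      # from the slide, at L = 3e34 cm^-2 s^-1
L1A_RATE_HZ = 100_000       # ASSUMPTION: 100 kHz Level-1 accept rate
BYTES_PER_TRACK = 32        # ASSUMPTION: packed helix parameters + quality word

tracks_per_s = TRACKS_PER_EVENT * L1A_RATE_HZ
gbps = tracks_per_s * BYTES_PER_TRACK * 8 / 1e9
print(f"{tracks_per_s:.1e} tracks/s -> {gbps:.1f} Gb/s")
# 3.0e+07 tracks/s -> 7.7 Gb/s, i.e. a few optical links' worth
```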

  9. Level 0 Staging
     • Stage 1 → ready 2013
       • L0 Calo FEX A
         • Connects to Calo hybrid RODs
         • Run parasitically, in tandem with the current trigger, & compare results (see the comparison sketch after slide 12)
       • L0 Topo
         • Connects to… L0 Calo FEX A (multiple), CMM++, FTK, MUCTPI, CTP
         • Provides the path for ROD → L0 CTP data
     • Stage 2
       • ROD
         • Connects to (DAQ via) GBT, L0 Calo FEX A, L0 Topo
         • Only at this point does the demonstrator start to look like the final system to on-line software
     • Stage 3
       • L0 Calo FEX B
         • Lower priority because it doesn't provide a new external interface
       • TCM (Timing and Control Module)
         • If necessary
     [Diagram: Cal proto RODs → L0 Calo FEX A]
     Ian Brawn

  10. Level 0 Staging
      [Same text as slide 9; the diagram now shows Cal proto RODs → L0 Calo FEX A → L0 Topo (global) → L0 CTP, with L0 Muon, FTK, and CMM++ also feeding L0 Topo.]
      Ian Brawn

  11. Level 0 Staging
      [As slide 10, with RODs added for readout.]
      Ian Brawn

  12. Level 0 Staging
      [As slide 11, adding L0 Calo FEX B and the TCM.]
      Ian Brawn
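The "run parasitically … & compare results" step on slide 9 amounts to an event-by-event diff between the current trigger and the demonstrator. Below is a minimal sketch of such a comparison harness; the event records and field names are invented for illustration, and the real version would run over both systems' readout streams.

```python
from collections import Counter

def compare_events(legacy_events, demo_events):
    """Match events by bunch-crossing ID and tally (dis)agreement."""
    demo_by_bcid = {e["bcid"]: e for e in demo_events}
    outcome = Counter()
    for ev in legacy_events:
        partner = demo_by_bcid.get(ev["bcid"])
        if partner is None:
            outcome["unmatched"] += 1
        elif sorted(ev["clusters"]) == sorted(partner["clusters"]):
            outcome["agree"] += 1
        else:
            outcome["disagree"] += 1    # candidates to dump and debug
    return outcome

legacy = [{"bcid": 1, "clusters": [(3, 7, 42)]}, {"bcid": 2, "clusters": []}]
demo = [{"bcid": 1, "clusters": [(3, 7, 42)]}, {"bcid": 2, "clusters": [(0, 0, 11)]}]
print(compare_events(legacy, demo))  # Counter({'agree': 1, 'disagree': 1})
```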

  13. Level 1 Staging
      • Slower schedule than L0
        • Connects to downstream modules on slower schedules than the Calo ROD prototypes, for example
      • Whereas L0 looks similar to the current trigger (pipelined, fixed latency, FPGA-based, etc.), L1 is less familiar and less well defined
      • Extended demonstrator programme
        • Possibility of using COTS (Commercial Off-The-Shelf) boards initially
          • E.g., to evaluate GPU- or CPU-based implementations of algorithms (see the sketch after this slide)
        • Run parasitically from the L0 demonstrator
          • A design consideration for the L0 Topo demonstrator
        • Conceivable that some subset of L1 will be built using COTS; however, any custom hardware will need to be prototyped, and maybe demonstrated before that
      • Programme: COTS-based algorithm demonstrator → technology demonstrator → prototype
        • Scheduling overlaps with the L0 programme
        • Design so that the L0 CTP can be bypassed
      Ian Brawn
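For the COTS evaluation above, a natural first step is timing a vectorised version of a candidate algorithm on the hardware in question. A minimal sketch with an arbitrary grid size and event count; swapping numpy for a GPU array library such as cupy would give a first GPU data point for the same kernel.

```python
import time
import numpy as np

def window_sums(towers):
    """2x2 sliding-window sums over a batch of tower grids (events, ny, nx)."""
    return (towers[:, :-1, :-1] + towers[:, :-1, 1:]
            + towers[:, 1:, :-1] + towers[:, 1:, 1:])

rng = np.random.default_rng(0)
events = rng.poisson(2.0, size=(1_000, 64, 64)).astype(np.float32)

t0 = time.perf_counter()
sums = window_sums(events)
dt = time.perf_counter() - t0
print(f"{len(events)} events in {dt * 1e3:.1f} ms ({len(events) / dt:.0f} events/s)")
```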

  14. Supporting Hardware
      • TCM (Timing and Control Module)
        • Do we need/want a module for clock distribution and packet routing for control and readout? (GBT)
        • Coupled to the backplane architecture
        • Initially avoid by placing this functionality on individual cards
        • May require a TCM for the final system
      • DSS (Data Source/Sink)
        • Do we need/want a module to act as a generic source/sink of test data for the demonstrator system? (see the sketch after this slide)
        • Fills gaps in the system due to unavailable modules (internal/external)
        • Use of standard interfaces (e.g., SNAP12) could render the hardware comparatively simple
          • Compared to the DSS for the current system
        • Use of standard interfaces may render it unnecessary
          • But this might not save us any work, due to the increased firmware load
      • Schedule presented here assumes a TCM is required but not a DSS
      Ian Brawn
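A software stand-in shows what a DSS has to do: emit deterministic test patterns and verify whatever is captured at the far end. The 32-bit word format and CRC framing below are illustrative assumptions, not a description of any real SNAP12 interface.

```python
import zlib

def make_frames(seed, n_frames, words_per_frame=8):
    """Deterministic pseudo-random frames, each tagged with a CRC32."""
    frames, state = [], seed
    for _ in range(n_frames):
        words = []
        for _ in range(words_per_frame):
            state = (state * 1103515245 + 12345) & 0xFFFFFFFF  # 32-bit LCG
            words.append(state)
        payload = b"".join(w.to_bytes(4, "little") for w in words)
        frames.append(payload + zlib.crc32(payload).to_bytes(4, "little"))
    return frames

def count_bad_frames(captured):
    """Frames whose trailing CRC32 no longer matches the payload."""
    return sum(1 for f in captured
               if zlib.crc32(f[:-4]) != int.from_bytes(f[-4:], "little"))

sent = make_frames(seed=0xCAFE, n_frames=1000)
assert count_bad_frames(sent) == 0   # clean loopback: no corrupted frames
```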

  15. Existing Schedule (1) Ian Brawn

  16. Existing Schedule (2) Ian Brawn

  17. Existing Schedule (3) Ian Brawn

  18. Existing Schedule (4) Ian Brawn

  19. Existing Schedule (5) Ian Brawn

  20. Existing Schedule (6) Ian Brawn

  21. Schedule Caveats & Notes
      • The following schedule is only a sketch
        • A more detailed schedule is unwarranted due to uncertainties in the project
      • Assumes an architecture for the system that is by no means certain
      • Mostly, only hardware design and production are shown
        • Schedule for specification, firmware, software, etc., must be inferred
      • Underestimates the minimum no. of iterations of demonstrator/prototype modules built
      • Underestimates the extent to which tasks can be run in parallel
      • No attempt to split tasks between institutes
      Ian Brawn

  22. Slice Demo Schedule Ian Brawn

  23. Conclusions
      • There is a lot of new hardware to be built
        • These hardware modules need to interact with each other
        • A demonstrator system is a necessity
      • We should take advantage of any prototypes/demonstrators of the external systems with which we need to interact
      • Planning will be more accurate once we have well-defined responsibilities within L1Calo
      • Effort required for alternative firmware configurations and supporting modules must not be neglected
      • Even with a minimal programme of demonstrators and prototype modules there is much to design, build and test
      • Keep our eyes on the final goal: Phase 2 system in 2021
        • Bonus if the demonstrator can add functionality to the live trigger before this
        • Must not let this distract us from our primary goal
      Ian Brawn
