
Ideas about Tests and Sequencing


  1. Ideas about Tests and Sequencing
     3rd March 2001
     C. N. P. Gee, Rutherford Appleton Laboratory

  2. Testing Overview
     • There are many different tests, separate but overlapping. We need to start making a complete list.
     • When to do which test depends on when the hardware arrives.
     • Comments on Module vs. System tests.
     • Connectivity - are modules correctly and reliably linked?
     • Algorithms - Analogue and Digital.
     • Timing stability and margins.
     • Cross-talk and low-rate error checks (Soak Tests).
     • Rate Limits.
     • Alternate Operating Modes and Fault Detection/Recovery.
     • System Stability and Behaviour.

  3. Comments on Module vs. System Tests
     • We will test individual modules as they are built; then small groups of modules, leading to the full PPr sub-system, full CP sub-system and full JE sub-system; building up to the complete slice and perhaps the ROIB and/or ROS.
     • Even with fully-tested modules, we need the capability to repeat module tests at will for groups of modules and for the complete slice.
     • The data-related TESTS and the TEST VECTORS need to be structured to fit together into a hierarchy: an "integrated test suite" (a minimal software sketch follows this slide).
     • We don't know how to do this yet, but it could save a lot of time and effort.
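
The "integrated test suite" idea could be expressed in software as a hierarchy in which slice-level suites are composed from exactly the same vectors and checks used at module level. Below is a minimal sketch of one possible structure; the class names and the `apply` callable are illustrative assumptions, not part of any existing framework.

```python
# Minimal sketch of a hierarchical test suite: module-level tests are reused
# unchanged as components of sub-system and slice-level suites. All names
# here are illustrative placeholders, not an existing framework.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class TestVector:
    """Input pattern plus the output it is expected to produce."""
    name: str
    inputs: dict        # e.g. {"link_07": [0, 1, 2, ...]}
    expected: dict      # e.g. {"readout_07": [0, 1, 2, ...]}


@dataclass
class Test:
    """A named test: apply each vector, compare captured data with expectations."""
    name: str
    vectors: List[TestVector]
    apply: Callable[[TestVector], dict]   # drives the hardware, returns captured data

    def run(self) -> bool:
        ok = True
        for v in self.vectors:
            if self.apply(v) != v.expected:
                print(f"{self.name}/{v.name}: MISMATCH")
                ok = False
        return ok


@dataclass
class TestSuite:
    """Composite node: a collection of module tests and/or nested suites."""
    name: str
    members: List[object] = field(default_factory=list)  # Test or TestSuite

    def run(self) -> bool:
        results = [m.run() for m in self.members]   # run everything, don't short-circuit
        return all(results)


# Usage idea: the same PPr/CPM/JEM module suites are reused inside the slice suite.
# slice_suite = TestSuite("slice", [ppr_suite, cp_suite, je_suite])
# slice_suite.run()
```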

  4. Connectivity
     • This section verifies that all cables and links join correctly:
     • Generate bit patterns (e.g. a ramp) at the input to each backplane or cable link, and check that the correct data are received (see the sketch below).
     • Rough timing needs to be done at the same time.
     • Links should be driven one at a time, so that swapped cables are quickly found.
     • Generate VME commands to each module in turn and verify (by eye) that the correct module is selected.
     • Check that individually addressed TTC commands are received where expected.
     • Repeat for DCS commands.
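
As an illustration of the ramp-pattern link check, the sketch below drives one link at a time and compares the captured words with the generated ramp. The write/capture callables are hypothetical placeholders for whatever VME or module-access library is actually used.

```python
# Hedged sketch of a one-link-at-a-time connectivity check using a ramp
# pattern. The two callables stand in for the real module-access library
# (playback memory at the source, spy memory at the sink).

def ramp(length, width_bits):
    """Ramp pattern that exercises every bit of the link word."""
    return [i % (1 << width_bits) for i in range(length)]


def check_link(link_id, write_source_pattern, capture_sink,
               length=256, width_bits=10):
    """Drive one link with a ramp and compare what arrives at the far end."""
    pattern = ramp(length, width_bits)
    write_source_pattern(link_id, pattern)     # load the source playback memory
    received = capture_sink(link_id, length)   # read the destination spy memory

    if received == pattern:
        return True
    first_bad = next(i for i, (a, b) in enumerate(zip(pattern, received)) if a != b)
    # A constant word offset usually points to a timing problem; data appearing
    # on the wrong sink while the expected sink stays quiet points to a swapped cable.
    print(f"{link_id}: first mismatch at word {first_bad}")
    return False


# Driving links one at a time makes swapped cables easy to spot:
# for link in all_links:     # all_links supplied by the cabling description
#     check_link(link, write_source_pattern, capture_sink)
```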

  5. Algorithms - Analogue
     • We rely on correct analogue processing. Test:
     • Analogue connectivity (a separate problem from the digital system);
     • Small, medium and large pulses (saturation);
     • A range of pulse widths;
     • The ability to capture pulses regardless of analogue timing;
     • An automatic procedure to calibrate timing and BCID coefficients;
     • An automatic procedure for LUT set-up;
     • Proper operation of BCID;
     • The ability to handle out-of-spec pulse shapes;
     • Conformity of the readout with the applied analogue input (resolution) - see the linearity sketch below.
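
One way the readout-conformity (linearity and saturation) check could look in software is sketched below: inject calibration pulses of increasing amplitude and compare the digitised result with a linear prediction that clips at the saturation code. The pulser and readout callables, and the single-gain linear model, are assumptions made purely for illustration.

```python
# Hedged sketch of a linearity/saturation scan for one trigger tower.
# The two callables and the simple linear-plus-clip model are illustrative
# assumptions, not the actual calibration procedure.

def linearity_scan(set_pulse_amplitude, read_tower_et, amplitudes,
                   gain, saturation_code=255, tolerance=2):
    """Return the (amplitude, measured, expected) triples that fail the check."""
    failures = []
    for a in amplitudes:
        set_pulse_amplitude(a)                            # drive the calibration pulser
        measured = read_tower_et()                        # digitised result from readout
        expected = min(round(gain * a), saturation_code)  # linear response, clipped at saturation
        if abs(measured - expected) > tolerance:
            failures.append((a, measured, expected))
    return failures
```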

  6. Algorithms - Digital
     • Already tested by the module engineers.
     • Testing adder trees (PPr towers, CPM hits, JEM energy and hits, CMMs) is easy, as described for the CMM.
     • Testing adder trees behind thresholds is more tedious: you can't see the sum, only the output from the thresholding stage.
     • Need to scan the threshold value to determine the adder content (see the sketch below).
     • This must be done separately for each comparator where several are in parallel.
     • Test vectors must be able to specify the required threshold settings.
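
The threshold scan can be made less tedious with a binary search: assuming the comparator fires when the sum is greater than or equal to the threshold, the hidden adder output is the largest threshold that still fires. The register-access callables below are hypothetical.

```python
# Hedged sketch of inferring a hidden adder sum through its comparator by
# scanning the threshold (binary search rather than a linear scan). Assumes
# the comparator fires when sum >= threshold; set_threshold() and hit_fired()
# are placeholders for the real register interface.

def infer_sum(set_threshold, hit_fired, max_value):
    """Return the adder output hidden behind a single threshold comparator."""
    lo, hi = 0, max_value            # invariant: the hit fires at threshold lo
    while lo < hi:
        mid = (lo + hi + 1) // 2
        set_threshold(mid)
        if hit_fired():              # sum >= mid, so the answer is at least mid
            lo = mid
        else:                        # sum < mid
            hi = mid - 1
    return lo


# Where several comparators sit in parallel on the same sum, repeat the scan
# for each one, with the test vectors parking the other thresholds out of the
# way (e.g. at 0 or at max_value).
```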

  7. Timing Margins and Stability
     • Measure the width of the timing windows (by changing TTCrx delays on the modules). Are they all similar and close to predictions? (A delay-scan sketch follows this slide.)
     • LVDS data from the PPr into CPMs and JEMs;
     • CPM/JEM backplane data into CMMs;
     • CMM-CMM and CMM-CTP cable links;
     • Input links to the RODs.
     • How sensitive are the timings to temperature or crate supplies?
     • How closely aligned do PPrs, CPMs, JEMs and CMMs need to be?
     • Using an external TTC clock, what are the upper and lower system frequencies?
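
A possible delay-scan procedure is sketched below: step the TTCrx deskew delay across its range, run a short data-integrity check at each setting, and report the width and centre of the error-free window. The control and check callables are placeholders, and a single contiguous window is assumed.

```python
# Hedged sketch of a timing-window measurement. set_ttcrx_delay() and
# link_check_passes() stand in for the real clock-control and data-check
# interfaces; the result assumes one contiguous error-free window.

import time

def timing_window(set_ttcrx_delay, link_check_passes, delays_ns, settle_s=0.1):
    good = []
    for d in delays_ns:
        set_ttcrx_delay(d)
        time.sleep(settle_s)          # let the clock distribution settle
        if link_check_passes():       # short ramp/data test at this delay
            good.append(d)
    if not good:
        return None                   # no working delay found at all
    width = max(good) - min(good)
    centre = (max(good) + min(good)) / 2
    return width, centre              # compare with predictions; repeat vs temperature
```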

  8. Cross-talk and Soak Tests
     • Cross-talk is hopefully small, so combine it with the soak tests.
     • Excite signal lines singly, and look on physically adjacent lines for non-zero data (see the sketch below). Long and tedious!
     • Short and long backplane links;
     • Connectors;
     • Cable links;
     • Between adjacent modules (e.g. through power supplies);
     • Digital-to-analogue pickup.
     • Include some long tests to match the previous LVDS link tests.
     • Include physics analogue data.
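
A hedged sketch of the adjacency check: excite one line at a time and verify that its physical neighbours stay quiet. The drive/capture callables and the adjacency map are assumptions standing in for the real hardware interface and cabling description.

```python
# Hedged sketch of a cross-talk scan: drive one aggressor line with maximal
# switching activity and check that all physically adjacent lines stay at
# zero. drive_line(), capture_line() and the adjacency map are placeholders.

def crosstalk_scan(drive_line, capture_line, adjacency, n_words=1024):
    """adjacency maps each line to its physically adjacent lines.
    Returns the (aggressor, victim) pairs that show non-zero pickup."""
    offenders = []
    for aggressor, neighbours in adjacency.items():
        drive_line(aggressor, toggling=True, n_words=n_words)    # worst-case activity
        for victim in neighbours:
            data = capture_line(victim, n_words)
            if any(word != 0 for word in data):
                offenders.append((aggressor, victim))
        drive_line(aggressor, toggling=False, n_words=n_words)   # quieten before the next line
    return offenders
```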

  9. Rate Limits
     • Check that all parts of the trigger run up to the specified limits (a burst-test sketch follows this slide):
     • Able to sustain a 100 kHz L1A rate with no readout data loss;
     • Able to handle multiple close L1As (8 in 40 bunches, then pause);
     • Apply and check readout back-pressure operation;
     • Run the RODs at their data limits for different numbers of slices and compression options;
     • Check a large number of non-zero channels (as in a calibration run).
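
The close-L1A requirement could be exercised with a burst test like the sketch below: send bursts of 8 accepts within 40 bunch crossings, pause between bursts, and confirm that every trigger produced a readout fragment. The burst and fragment-counting callables are hypothetical trigger-control and readout-monitoring hooks.

```python
# Hedged sketch of a close-L1A burst test: repeated bursts of 8 accepts
# inside a 40-bunch-crossing window, then a check that no fragments were
# lost under back-pressure. Both callables are illustrative placeholders.

def burst_test(send_l1a_burst, count_rod_fragments,
               n_bursts=1000, accepts_per_burst=8, window_bc=40):
    sent = 0
    for _ in range(n_bursts):
        send_l1a_burst(accepts_per_burst, window_bc)   # 8 L1As inside 40 BC, then pause
        sent += accepts_per_burst
    received = count_rod_fragments()                   # fragments seen by the RODs
    return received == sent, sent - received           # (ok, number of fragments lost)
```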

  10. Alternate Operating Modes and Fault Detection
     • Can we run with BCID off for individual channels?
     • Can we simulate all error sources for checking?
     • Can we really run with links that have errors, or with loss of a link?
     • Are all error sources detected and counted so that they can be isolated? Can they all be disabled?
     • How long does it take to start from cold, to start a new run, etc.?

  11. System Stability and Behaviour
     • It must be possible to reconfigure the system easily (e.g. to change cables).
     • Can the system (trigger plus supporting computers and software) run reliably for long periods (e.g. weekends)?
     • Is system start-up/shutdown and run start/stop repeatable?
     • Is there a routine timing set-up procedure, including setting up with respect to LHC bunch 0?
     • How does the system behave if there are bad channels?

  12. The End
     • Thank you.
