
Slow Controls, Simulations and DAQ/Data Analysis— WBS 1.7


Presentation Transcript


  1. Slow Controls, Simulations and DAQ/Data Analysis—WBS 1.7 Jim Miller Boston University Chicago meeting, June 6-8, 2007

  2. Outline • Subsystem Overview, WBS 1.7 • Scope of Work • Plans and organization • Summary

  3. Scope of Work This WBS element has four independent parts: • Develop a reliable and redundant system to control the operation of the EDM apparatus; • Develop a data acquisition system for carrying out the EDM measurement program; • Coordinate simulations to model the apparatus and all aspects of the experiment; • Coordinate development of analysis codes to process large amounts of data.

  4. Planning Assumptions—Slow Controls • EPICS (Experimental Physics and Industrial Control System) will control and monitor the apparatus and will control the measurement cycle (change valves, HV, magnets, etc.) • Anticipate up to 1000 read-out and control parameters; the system must be distributed, reliable, capable of remote operation, and redundant (if the main/local EPICS control fails, hard-wired local control should be readily available) • Similar sub-systems for neutronics, cryogenics, 3He, inserts, and magnetic fields (N, C, H, I, M) • Utilize Unix/Linux, VME and VxWorks to capitalize on existing software libraries (the Argonne light source web page is a repository of such libraries and a reference source for EPICS information) • Safety decisions should be hard-wired locally; monitored and alarmed but NOT controlled by EPICS (quench protection, HV breakdown, vacuum loss, …)

  5. EPICS is widely used (SNS, APS, JLab). It is multi-headed: a framework, a protocol (Channel Access), tools, and drivers. [Architecture overview, courtesy Bob Dalesio (APS); * marks options in common use] • Workstations: *Sun, HP, DEC/Alpha, Silicon Graphics, *PC, Macintosh; OS: Unix, *Linux, *Windows • Site LAN/WAN • I/O Controllers: *VME, VXI, PCI, workstations; OS: *VxWorks, Unix, Windows, RTEMS, RTLinux, L4 Linux • Remote and local I/O buses: ControlNet, PCI, CAN-Bus, *Industry Pack, *VME, VXI, ISA, CAMAC, *GPIB, Profibus, Bitbus, *Serial, *Allen-Bradley, Modbus, Yokogawa, G-3, USB, *Ethernet/IP • Field I/O

  6. Likely EPICS slow control baseline • Choose VME-based hardware: reliable and widely used • Choose VxWorks, a real-time operating system • VxWorks software development should be done on Sun Solaris • VxWorks-based applications are downloaded to local computers (Input/Output Controllers, or IOCs) in each VME crate • Local control devices are mounted in VME crates • Ideally, each sub-system would have a VME crate, IOC, etc., and would develop its slow controls with EPICS from day 1 • A sub-committee of EPICS experts is being formed to help collaborators select control devices and to assist in EPICS setup. Please check with this group before purchasing control equipment.

  7. EPICS - basic example Suppose that you want to read the voltage from a thermistor: • A VME module is purchased which senses voltages. • Or, more commonly, a separate module reads the voltage, digitizes it, then sends the data via GPIB, serial, etc. to an interface module in the VME crate. Either way, we strongly encourage using modules for which EPICS drivers already exist (information on supported modules is available at the Argonne Light Source web site: www.aps.anl.gov/epics). • A VME Input/Output Controller (IOC), a computer mounted in the VME crate running the VxWorks operating system, reads the data in real time. • The datum is then transferred to your PC (Linux, Windows, …) for processing. • Simple input or output commands • Mouse-driven PC display with menus allowing read/write of variables, strip charts, monitoring, etc. • Each variable must have a unique name, up to 26 characters, since your PC has access to all variables on Channel Access (CA) • Naming scheme example: cryo:read:upper2:temp:F or similar.
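
To make the read path concrete, here is a minimal Channel Access client sketch in C++ (EPICS base, cadef.h, linked against libca). The PV name is the naming-scheme example from this slide; the timeout values are arbitrary placeholders and error checking is omitted.

```cpp
/* Minimal Channel Access client sketch: read one process variable by name. */
#include <cadef.h>
#include <cstdio>

int main() {
    const char *pv = "cryo:read:upper2:temp:F";   /* example PV name from the naming scheme */
    double value = 0.0;
    chid channel;

    ca_context_create(ca_disable_preemptive_callback);
    ca_create_channel(pv, NULL, NULL, CA_PRIORITY_DEFAULT, &channel);
    ca_pend_io(2.0);                               /* wait up to 2 s for the connection */

    ca_get(DBR_DOUBLE, channel, &value);           /* request the current value */
    ca_pend_io(2.0);                               /* wait for the read to complete */
    std::printf("%s = %g\n", pv, value);

    ca_clear_channel(channel);
    ca_context_destroy();
    return 0;
}
```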

  8. EPICS and CODA in a VME crate [schematic] A single-board computer (SBC, PowerPC) running VxWorks sits in the VME crate together with digital-to-analog converters (DACs), etc., going out to device controllers, and analog-to-digital converters (ADCs), TDCs, scalers, etc., coming in from the detectors. I/O modules + SBC + LAN = Input/Output Controller (IOC). Over the LAN the IOC talks to an Operator Interface (OPI) on a Linux host computer (EPICS core, IOC database management, various development toolkits, software device drivers, Motif tools) and to a CODA host on a Linux host computer (CODA core (coda_roc), database management (SQL), software device drivers, Tcl/Tk tools). The link between the EPICS and CODA sides still needs to be developed. We need to be mindful of electrical noise causing problems with the SQUIDs.

  9. EPICS test bench setup at TUNL (M. Ahmed, C. Swank, C. Taylor, S. Hartman) [photo: VME crate with MVME-5100 SBC, VMIVME-4132 DAC, current integrator, alarms, LEDs, etc.]

  10. EPICS control of the test bench devices

  11. Slow Controls R&D: Development of an EPICS-based system for nEDM Mohammad Ahmed, Duke University. Work at TUNL is focused on developing a prototype EPICS system for the control of the R&D cryo system. The EPICS development is being done by Mohammad Ahmed (Duke), and the cryo system development is being coordinated with David Haase and the NC State group. • Outline of EPICS R&D • Development of slow-controls infrastructure • Hierarchy of controls and menus • Nomenclature of IOCs, channels, etc. • Development of an outline of standards for hardware purchases to: • maximize compatibility of hardware across groups • reduce software development • ensure cost effectiveness of controls (minimize cost per control channel) • Development of a prototype system which can be used as an example

  12. Hardware and Software What have we acquired? • VxWorks (Mohammad Ahmed obtained VxWorks via an educational grant of the software) • VME Crate • Single Board Computer • Analog Output Controller What is next? • Lakeshore Temperature Monitor • VME Analog Input • USB Controller (computer) • Temperature Sensors

  13. Software/GUI development We are in the process of making an nEDM distribution of EPICS with a standard set of extensions. EPICS supports all sorts of mouse-driven control screens, e.g. Motif, Tcl/Tk; there are many modern ones to choose from; perhaps the simplest these days would be web-based? A prototype nEDM main control window [screenshot]

  14. EPICS Controls for R&D Cryo Readout will be implemented with a Lakeshore controller

  15. Planning Assumptions – DAQ • Neutron-3He capture event rate ~1 kHz; background (β and γ) rate comparable to the event rate • Require a PMT coincidence trigger to suppress background events in the light guides • Digitize prompt and afterpulse PMT waveforms for ~a few msec to suppress background events. Issue: how much zero suppression in the waveform data stream will be possible? • Digitize SQUID voltages at 1 ms intervals (3He precession rate ~10-30 Hz) • Use VME and CODA, developed and supported by JLAB and used at TUNL, … • Issue: need to construct the pipeline which carries EPICS data into the CODA DAQ stream
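
As a concrete picture of the zero-suppression question raised above, the following sketch keeps only short windows of samples around threshold crossings in a digitized PMT waveform and drops the quiet baseline. The threshold and window lengths are placeholder values, not tuned parameters.

```cpp
// Illustrative zero suppression for a digitized PMT waveform: keep a window of
// samples around each threshold crossing, discard the rest of the baseline.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Window { std::size_t start; std::vector<uint16_t> samples; };

std::vector<Window> zeroSuppress(const std::vector<uint16_t>& wf,
                                 uint16_t threshold = 50,   // counts above baseline (placeholder)
                                 std::size_t pre = 8,       // samples kept before the crossing
                                 std::size_t post = 32)     // samples kept after the pulse
{
    std::vector<Window> out;
    std::size_t i = 0;
    while (i < wf.size()) {
        if (wf[i] > threshold) {
            std::size_t lo = (i > pre) ? i - pre : 0;
            std::size_t hi = i;
            while (hi < wf.size() && wf[hi] > threshold) ++hi;   // span of the pulse
            std::size_t end = std::min(hi + post, wf.size());
            out.push_back({lo, std::vector<uint16_t>(wf.begin() + lo, wf.begin() + end)});
            i = end;
        } else {
            ++i;
        }
    }
    return out;
}
```

How much this buys depends on the occupancy: if pulses are rare within the few-msec digitization window, most samples are baseline and the compression factor is large.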

  16. Sample event from HMI test setup

  17. Afterpulsing is a signature of good events (Mike Hayden et al., LT12)

  18. Schematic DAQ Also need to add a route for EPICS slow-control info to get into the DAQ stream

  19. A Note on R&D for the DAQ development TUNL has full CODA capability. Complete hardware and software infrastructure exists to do DAQ development. Mohammad Ahmed is a co-lead on CODA-based DAQ at TUNL. IDEA: use flash ADCs to transient-digitize the PMT signal. Additional ADC/QDC/TDCs can be made part of the readout. New issue: do we need to develop FADCs which are not noisy and which sit near the PMTs? Where do we stand on DAQ R&D for nEDM? • We have a VME crate • We have a single-board computer • We have an FADC (100 MHz) • We have ADC/QDC/TDC • We have scalers • Noise levels should be measured

  20. A Note on R&D for the DAQ development Software development is being carried out for the FADC. The plan is to read out sample data using the FADC. What is state-of-the-art on the market for FADCs? Struck SIS3350 (VME-based) • 4 channels • 500 MHz • 12-bit resolution • 512 MSamples/channel • internal/external clocks. Getting close to what we want? Fine if we have <20 channels or so; may need a custom design if we need a large number of channels. The goal is to do a trigger-based, deadtime-free acquisition of multi-event signals (see the sketch below for the general readout idea).
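
A conceptual sketch of such trigger-driven, deadtime-free readout uses two memory banks: the digitizer fills one bank while the host reads out the other. The fadc_* calls below are hypothetical placeholders standing in for real VME driver calls (they are not the Struck SIS3350 API); they are stubbed with a tiny simulation so the sketch compiles and runs.

```cpp
// Double-buffered ("bank-switched") readout sketch with hypothetical hardware calls.
#include <cstdint>
#include <cstdio>
#include <vector>

static int fills = 0;
void fadc_arm_bank(int bank)              { std::printf("arm bank %d\n", bank); }   // stub
bool fadc_bank_full(int /*bank*/)         { return ++fills % 1000 == 0; }           // fake trigger
std::vector<uint16_t> fadc_read_bank(int) { return std::vector<uint16_t>(512, 0); } // stub data
void process_events(const std::vector<uint16_t>& d) { std::printf("read %zu samples\n", d.size()); }

int main() {
    int active = 0;                       // bank currently being filled
    fadc_arm_bank(active);
    for (int cycles = 0; cycles < 5000; ++cycles) {
        if (fadc_bank_full(active)) {
            int done = active;
            active = 1 - active;          // swap banks: acquisition resumes immediately
            fadc_arm_bank(active);
            process_events(fadc_read_bank(done));  // drain the finished bank while the other fills
        }
    }
    return 0;
}
```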

  21. Planning Assumptions—Data Analysis • Expect 10 GB of waveform data per 1000 sec measurement cycle (10 kbytes of data per event, event rate of 1 kHz). At 30% live time, ~100 TB per year. There is potentially a large error on this number; it depends on the number of PMTs, the number of bits in the WFD (which depends on the dynamic range needed; are 12 bits required?), the WFD sampling rate (which depends on the width of the pulses), and the decision on the permissible extent of zero suppression. • Need fast analysis for diagnostic monitoring during commissioning: an online version to monitor system functionality, an offline version for detailed analysis • Need reliable data compaction methods as the experiment moves into production mode • The slow control data rate is manageable: ~2 kB at 1 Hz, ~60 GB per year • Establish programming standards to facilitate independent analyses of the data by collaboration members
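
A quick back-of-the-envelope check of the raw-data volume quoted above (a sketch, assuming a 3.15e7 s year and the slide's nominal event size, rate, and live time):

```cpp
// Reproduces the slide's estimate: ~10 GB per 1000 s cycle, ~100 TB per year at 30% live time.
#include <cstdio>

int main() {
    const double event_size   = 10e3;     // bytes per waveform event (~10 kB)
    const double event_rate   = 1e3;      // events per second (1 kHz)
    const double cycle_length = 1000.0;   // seconds per measurement cycle
    const double live_time    = 0.30;     // fraction of calendar time taking data
    const double year         = 3.15e7;   // seconds per year

    double per_cycle = event_size * event_rate * cycle_length;      // 1e10 B = 10 GB
    double per_year  = event_size * event_rate * live_time * year;  // ~9.5e13 B ~ 100 TB
    std::printf("per cycle: %.1f GB, per year: %.0f TB\n", per_cycle / 1e9, per_year / 1e12);
    return 0;
}
```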

  22. Major Planning Assumptions—Simulations Work is carried out by WBS system groups: • beam transport into the measurement cell* • How many neutrons get there; what are their momentum, spin, and position? Where are they lost along the way? How much background do they produce? • 3He preparation, injection and removal • magnetic and electric field configurations* • neutron cell dynamics (momentum and spin motions) and the n-3He spin interaction • experiment cycle optimization • scintillation light production and propagation* (* potentially impacts design)

  23. General simulation package • ~3 or 4 major pieces • UCN in cells • n beam flux • Light production and collection in the cells • 3He cycle and behavior in the cell • Geometries defined in, or at least compatible as much as possible with, GEANT4 • Use ROOT-based analysis • C++, C, Fortran • Begin with • central cell simulation with interactions with walls, electrodes, magnets, … • flexibility to study different geometries and materials: database-driven, so that geometries can be modified by changing parameters rather than re-coding (see the sketch after this list) • model where neutrons go en route to the cell, how many stop in the cell, how many pass through the cell, where they get captured and cause background, etc. • Model the neutron and 3He trajectories and spin motions in the E and B fields of the cell; study geometric phase effects, spin coherence times, and other systematic issues • Model the capture process: neutrons on 3He (probably not available in GEANT) • Model light collection (get neutron capture coordinates from the central cell simulation, return the distribution of PMT responses) (questionable in GEANT) • Simulate effects of cosmics and other backgrounds, and the effectiveness of suppression via detection of afterpulses • Once backgrounds are known, optimize the cycle times and estimate the nEDM sensitivity • Effects on the B field due to currents from HV discharges, stray B fields, … • Build in extensive fail-code bits to keep track of where and why particles are lost, failure modes, etc.
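
As an illustration of the database-driven geometry idea, here is a minimal GEANT4 sketch in which the central-cell dimensions come from a simple parameter struct standing in for a shared geometry database, so they can be changed without re-coding. The dimensions and materials are placeholders, not nEDM design values.

```cpp
// Parameter-driven GEANT4 detector construction (illustrative only).
#include "G4VUserDetectorConstruction.hh"
#include "G4Box.hh"
#include "G4LogicalVolume.hh"
#include "G4PVPlacement.hh"
#include "G4NistManager.hh"
#include "G4Material.hh"
#include "G4SystemOfUnits.hh"

struct CellParameters {                       // would be filled from the common database
    double halfX = 5.0 * cm, halfY = 5.0 * cm, halfZ = 20.0 * cm;
};

class CentralCellConstruction : public G4VUserDetectorConstruction {
public:
    explicit CentralCellConstruction(const CellParameters& p) : fPar(p) {}

    G4VPhysicalVolume* Construct() override {
        auto* nist   = G4NistManager::Instance();
        auto* vacuum = nist->FindOrBuildMaterial("G4_Galactic");
        // Superfluid-helium stand-in built by hand (placeholder density).
        auto* lhe = new G4Material("LiquidHelium", 2., 4.003 * g / mole, 0.145 * g / cm3);

        auto* worldBox = new G4Box("World", 1.0 * m, 1.0 * m, 1.0 * m);
        auto* worldLV  = new G4LogicalVolume(worldBox, vacuum, "World");
        auto* worldPV  = new G4PVPlacement(nullptr, {}, worldLV, "World", nullptr, false, 0);

        auto* cellBox = new G4Box("Cell", fPar.halfX, fPar.halfY, fPar.halfZ);
        auto* cellLV  = new G4LogicalVolume(cellBox, lhe, "Cell");
        new G4PVPlacement(nullptr, {}, cellLV, "Cell", worldLV, false, 0);
        return worldPV;
    }
private:
    CellParameters fPar;
};
```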

  24. Simulations - continued • Goal: make simulations as compatible as possible, distribute the tasks, and build a more global simulation package which is available for general use • We will need a bank of CPUs for simulations, and later for data analysis • A simulations committee will be formed, with knowledgeable members from each subsystem, to coordinate simulations and establish standards • Draft decision: ROOT and GEANT4 compatibility; compatible output formats; a common database and naming scheme for variables where possible • The full simulation should be able to deliver 'real' data in 'real' format for playback in the data analysis • Likely scenario: we don't know the final DAQ format yet; set up a flexible intermediate format agreed upon by the offline analysis and simulations groups, and later convert real data to this format. In practice, there is no offline team yet; the simulations team will try to put something together with offline in mind • Need volunteers from each subsystem! • We also need a brief summary from each group describing what simulations they have done, are working on, or are planning • Maintain a catalogue of online notes
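
One way to realize such a flexible intermediate format is a ROOT TTree that both the simulation and, later, converted real data can fill. The sketch below is only illustrative; the branch names and contents are placeholders, not an agreed format.

```cpp
// Illustrative intermediate event format as a ROOT TTree, filled with dummy events.
#include "TFile.h"
#include "TTree.h"
#include <vector>

int main() {
    TFile file("nedm_events.root", "RECREATE");
    TTree tree("events", "nEDM intermediate event format (illustrative)");

    int    run = 0, event = 0;
    double cycleTime = 0.0;                 // time within the measurement cycle
    std::vector<float> pmtCharge;           // one entry per PMT (placeholder)

    tree.Branch("run",       &run,       "run/I");
    tree.Branch("event",     &event,     "event/I");
    tree.Branch("cycleTime", &cycleTime, "cycleTime/D");
    tree.Branch("pmtCharge", &pmtCharge);

    for (event = 0; event < 100; ++event) { // dummy events standing in for simulation output
        cycleTime = event * 1.0;
        pmtCharge.assign(12, 0.f);
        tree.Fill();
    }
    tree.Write();
    file.Close();
    return 0;
}
```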

  25. A few examples of simulations • Betsy Beise has an undergraduate (Joshua Rehak) who will get a lightguide program (LITRANI) running to evaluate light collection in the proposed light guides over the summer (already have design from T. Ito, expect alternative geometry in a few weeks from L. Roberts) • Bob Golub plans to study n, He3 spin evolution, geometric phases, etc., and would welcome student help • The Caltech group has been studying spin coherence times, trajectories, geometric phase effects, etc. in addition to B field studies. • Neutron transport simulations at Kentucky seem to be well-along; we need to get the output parameters of neutrons for input to central cell, also distribution of where neutrons were lost in order to do background studies. • Goal: build a team to create the central cell simulation in GEANT4 and set up interfaces to other system simulations. BU postdoc Vanya Logashenko will begin the process

  26. Project Management

  27. Summary • Slow Control • VME-based, EPICS-compatible modules, using VxWorks in the VME computer • A small committee of experts should be consulted for purchases and setup of local systems • DAQ • VME-based CODA, which uses the VxWorks OS (unless VME is found to be too noisy for the SQUIDs!) • FADCs for PMT readout • Slow control data incorporated into the DAQ data • Simulations • GEANT4, ROOT as much as possible • Develop a GEANT4 model of the central cell, take input neutrons from transport simulations, study spin evolution, particle diffusion, geometric effects, effects of leakage currents, … • Put together a group of simulation experts from each sub-system which will decide standards and work toward compatible code. Please volunteer!

  28. Technical Interfaces • N, C, H, I, M groups will select hardware compatible as much as possible with existing EPICS software, in consultation with the controls committee • N, C, H, I, M groups can choose how they want to control their equipment during initial testing, but will need to coordinate closely with the controls group to ensure compatibility at the integration stage • Preference: control systems can be delivered to the N, C, H, I, M groups if they prefer to use EPICS at the outset, but funds may not yet be available to support this (neutronics, cryogenics, 3He, inserts, and magnetic fields = N, C, H, I, M)

  29. Initial Risk Analysis Note: additional data are available in the EDM Risk Management Plan

  30. Estimated Costs Abbreviations: ODC = other direct costs; MP = major purchases; Cont. = initial contingency estimate; TEC = total estimated cost. *Costs do not include escalation. [Cost table not reproduced in the transcript.]

  31. Preliminary BA Schedule - Electronics

  32. Deliverables • Subsystem deliverables include: • Five EPICS VME control systems for sub-system groups • Integrated EPICS control system at the FNPB • CODA-based DAQ and local analysis computer systems • Simulation results that impact the design of the apparatus • Simulations that model analysis of the data stream from the DAQ

  33. Summary • Subsystem Overview • Scope of Work • Work Packages • Technical Discussion • Project Management • Technical Interfaces • Risk Analysis • Estimated Costs • Preliminary Schedule • Deliverables

  34. Supplementary Slides

  35. Major Procurements and Total Cost All equipment is standard off-the-shelf hardware: • VME systems for slow controls (crates, controllers, ADCs, DACs, stepper motors, scalers, timers, computers, VxWorks, …) ~$226K • VME system for DAQ (waveform digitizers, SQUID readouts, MCAs, discriminators, logic modules, computers, …) ~$82K • Total unburdened hardware: $308K • Total labor (technician and software engineer): $101K • Total including contingency: $457K

  36. STEP file input from CAD to GEANT4 There is NIST-developed code for reading "AP203" STEP files in GEANT4 through versions 4.5.x. Starting with version 4.6, STEP file support was removed. We are now working to implement a trial conversion of the central region of the apparatus.

  37. Work Packages
