
Report of the 'Computing and Readout' Working Group




Presentation Transcript


1. Report of the 'Computing and Readout' Working Group
presented by Adrian Biland / ETH Zurich
CTA Meeting, Paris, 2 March 2007

2. Active WG Members
• Angelo Antonelli (REM)
• Adrian Biland (MAGIC)
• Thomas Bretz (MAGIC)
• Stefano Covino (REM)
• Andrii Neronov (ISDC)
• Nicolas Produit (ISDC)
• Ullrich Schwanke (HESS)
• Christian Stegmann (HESS)
• Roland Walter (ISDC)
(several more on the mailing list)
• >50% not from HESS / MAGIC !!
• A lot of experience with Cherenkov telescopes, astronomy data formats and interfaces, and huge data rates (accelerator experiments)
An internal draft of a TDR-like paper exists.

3. Two fundamentally different approaches to operation:
- collaboration mode (as in particle physics)
- observatory mode (usual in astronomy)
Main differences (simplified; mixed modes exist):

4. But usually the experiment gets the renown, not the PI: e.g. 'HUBBLE found ...'

5. Doing physics
Particle physics:
- (usually) clean, well-defined environment
- clean, self-consistent theories to check ==> predictions
- dedicated, 'simple' experiments to test predictions
- do EXPERIMENTS ==> collaboration mode OK
Astronomy:
- no control over the environment or the 'setup of the experiment'
- no clean theory; the fundamental physics is obscured by complicated standard-physics processes
- usually need many sources and/or many observatories (MWL) to get conclusive answers
- OBSERVATIONS ==> observatory mode, data mining (for MWL)

6. Doing physics
What about CTA? We are in a paradigm shift:
WHIPPLE, HEGRA, CAT, ...: 'source hunting'
- proof of concept
- invent new fundamental techniques
- new source ==> new conference
- collaboration mode is OK
H.E.S.S., MAGIC, ...: 'prove that VHE is an important part of astronomy'
- mature technology
- surprising richness of the VHE sky
- impressive results; few conclusive answers
- MWL becoming more and more important ==> must incorporate external experts
- collaboration mode getting difficult (author lists !!!)

7. Doing physics
CTA: 'understand the physics' (hopefully)
- expect ~1000 sources (cannot use a PhD student per source)
- need automatic data (pre)processing
- can do statistical analysis [compare astrophysics: a lot was learned from the statistics of the Hertzsprung-Russell diagram ...]
- MWL becoming extremely important
  - for steady/periodic sources: data mining
  - for transients: additionally dedicated MWL campaigns
Final goal: UNDERSTANDING PHYSICS ==> need to incorporate the brightest minds (not all of them will be willing to dedicate themselves to an experiment for several years ...)

8. Doing physics
==> CTA is better operated (part time) as an Observatory (allow guest observers) and should allow data mining (public data access).
Details to be discussed/agreed:
- how much time for guest observers
- how long a delay until data become public domain
This can make the difference between CTA being seen as an obscure corner or as a major pillar of astronomy ...

9. CTA as (open) Observatory
• Need a well-defined procedure for submitting and processing proposals
• Need well-defined access to data and analysis programs ...
What is more efficient: centralized or decentralized structures ???

10. 'Array Control Center' (on site)
Tasks:
- monitor the operation of CTA (goal: automatic/robotic operation; but with >>10 (100?) telescopes there will be hardware problems)
- ensure safety (nobody within the array during operation; what about power failures at sunrise; ...)
- buffer raw data until shipped to the data center (even with a fast internet connection, we must foresee a buffer in case of problems; see the sizing sketch below)
- monitor the quick-look analysis (on-site analysis)
- ...
Most of this can be done by local technicians (but if we want to send out alerts, they need verification by experts).
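To make the buffering requirement concrete, here is a back-of-the-envelope sizing sketch in Python. The data rate, dark time and outage length are purely illustrative assumptions, not CTA design figures.

# Back-of-the-envelope sizing of the on-site raw-data buffer.
# All numbers below are illustrative assumptions, not CTA design figures.

DATA_RATE_MB_S = 200        # assumed average raw-data rate of the full array
OBSERVING_H_PER_NIGHT = 8   # assumed dark time per night
OUTAGE_NIGHTS = 14          # assumed worst-case loss of the link to the data center

nightly_volume_tb = DATA_RATE_MB_S * OBSERVING_H_PER_NIGHT * 3600 / 1e6
buffer_tb = nightly_volume_tb * OUTAGE_NIGHTS

print(f"~{nightly_volume_tb:.1f} TB per night, "
      f"~{buffer_tb:.0f} TB buffer for a {OUTAGE_NIGHTS}-night outage")

With these assumed numbers the buffer comes out at roughly 80 TB; the point is only that the required size scales linearly with the data rate and the outage one wants to survive.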

11. Operations (center?)
- submission and handling of proposals
- planning the operation of CTA; scheduling
- handling incoming ToO alerts [GRB: directly to Array Control] (see the scheduling sketch below)
- controlling the operation of CTA (automatic/robotic operation ==> can also work if there is some downtime in communication)
- controlling the hardware status of CTA (slow-control level)
- ...
Needs CTA hardware/physics experts (available / on call) (could be 12 time zones away ==> no night shifts ?!)
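As an illustration of the ToO handling described above, here is a minimal Python sketch of a priority-queue schedule in which GRB alerts bypass the queue and go straight to Array Control, while other ToOs are inserted with top priority. All class, field and function names (e.g. send_to_array_control) are hypothetical, not part of any defined CTA interface.

# Minimal sketch of ToO handling in the planned schedule, assuming a simple
# priority queue.  Names are illustrative, not a CTA interface.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Observation:
    priority: int                     # lower number = observed first
    target: str = field(compare=False)

schedule = []
heapq.heappush(schedule, Observation(priority=5, target="AGN monitoring"))
heapq.heappush(schedule, Observation(priority=3, target="Galactic survey block"))

def send_to_array_control(target: str) -> None:
    print(f"repoint immediately: {target}")   # hypothetical direct repointing path

def handle_too_alert(target: str, is_grb: bool) -> None:
    """GRB alerts go straight to Array Control; other ToOs get top priority."""
    if is_grb:
        send_to_array_control(target)
    else:
        heapq.heappush(schedule, Observation(priority=0, target=target))

handle_too_alert("example GRB alert", is_grb=True)
print("next scheduled:", heapq.heappop(schedule).target)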

12. Data (center?)
Different possible implementations exist; the extreme cases:
- data repository
- 'science support center'
For an (open) Observatory it is less important where the disks are located than to have a dedicated contact point for inexperienced users (the 'luxury level' remains to be defined: users get raw data vs. users get 'final plots').

13. Data (center?)
- receive and store data from CTA
- calibration (==> check hardware status)
- automatic (pre)processing of data
- archive data at different stages: raw, compressed, preprocessed, photon list, ... (?)
- ensure availability of data to predefined groups: PI, CTA consortium, public domain (see the access sketch below)
- make available (and improve) standard analysis tools
- offer data-analysis training sessions/schools
- ...
Needs CTA analysis experts.
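A minimal sketch of the staged access rule, assuming a one-year proprietary period (a placeholder value; the actual embargo is exactly one of the decisions listed above):

# Sketch of the staged data-access rule.  The one-year proprietary period is
# an assumption, to be decided by the consortium.
from datetime import date, timedelta

PROPRIETARY_PERIOD = timedelta(days=365)   # assumed embargo before public release

def access_allowed(observation_date: date, user: str, pi: str,
                   consortium_members: set[str], today: date) -> bool:
    """Return True if `user` may download data taken on `observation_date`."""
    if user == pi or user in consortium_members:
        return True                                       # PI and consortium: immediate access
    return today - observation_date > PROPRIETARY_PERIOD  # everyone else: after the embargo

members = {"a.biland", "u.schwanke"}
print(access_allowed(date(2007, 3, 2), "guest", "a.biland", members, date(2007, 9, 1)))  # False
print(access_allowed(date(2007, 3, 2), "guest", "a.biland", members, date(2008, 9, 1)))  # True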

14. General remark: CTA staff physicists shall be allowed/encouraged to spend a reasonable part of their time doing their own physics within CTA (as PI or CoI ...).

15. Design Studies
On the next slides, topics for design studies are marked in blue.

16. Towards a CTA Standard Analysis
0) adapt H.E.S.S./MAGIC software to analyze CTA MC for the hardware design studies (partially done)
1) define needs:
   - underlying data format (FITS, root, ...)
   - interfaces to external data (for MWL analysis)
   - what amount of RAW data must be archived for re-analysis (FADC slices? only pixels above threshold?) [might have to archive several PBytes/year]
   - ...
2) tools survey (what exists and can be re-used)
3) start programming (rather early, to be ready for the first telescope!)
The package must be usable by non-experts ==> 'KISS'
(A possible photon-list stage is sketched below.)
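If FITS were chosen as the underlying format, the archived photon-list stage could look roughly like the following sketch (written with the astropy FITS module; column names, units and values are illustrative only, not a proposed CTA standard).

# Sketch of one possible archive stage: a reduced "photon list" written as a
# FITS binary table (assuming FITS is the chosen format).  Columns and values
# are illustrative only.
import numpy as np
from astropy.io import fits

n_events = 1000
columns = fits.ColDefs([
    fits.Column(name="EVENT_ID", format="K", array=np.arange(n_events)),
    fits.Column(name="TIME",     format="D", unit="s",
                array=np.sort(np.random.uniform(0, 1800, n_events))),
    fits.Column(name="RA",       format="E", unit="deg",
                array=np.random.normal(83.63, 0.1, n_events)),
    fits.Column(name="DEC",      format="E", unit="deg",
                array=np.random.normal(22.01, 0.1, n_events)),
    fits.Column(name="ENERGY",   format="E", unit="TeV",
                array=np.random.lognormal(0.0, 1.0, n_events)),
])
hdu = fits.BinTableHDU.from_columns(columns)
hdu.name = "EVENTS"
hdu.writeto("photon_list.fits", overwrite=True)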

17. Towards MC Mass Production
The hardware design studies (and later operation) need a huge amount of MC ==> urgent
- tools survey
- optimize / speed up programs
GRID:
- exists (most probably mature soon)
- EU spent €€€€€€€€ ==> success by definition
- at the moment, a huge amount of unused CPU is available
MC is the easiest use case for the GRID and can profit most from it ==> concentrate GRID activities in the MC package.
The analysis software shall be GRID-aware, but not rely on it. (A job-splitting sketch follows below.)
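MC production is embarrassingly parallel, which is why it maps so easily onto the GRID: each job simulates an independent block of showers with its own random seed. The sketch below only writes CORSIKA-style steering files for such jobs; the keywords and numbers are schematic, and actual submission would go through whatever GRID middleware is chosen.

# Split a large shower-simulation request into independent jobs, one steering
# file each.  Keywords and values are schematic, not a validated configuration.
from pathlib import Path

TOTAL_SHOWERS = 10_000_000
SHOWERS_PER_JOB = 50_000

outdir = Path("mc_jobs")
outdir.mkdir(exist_ok=True)

for job_id in range(TOTAL_SHOWERS // SHOWERS_PER_JOB):
    steering = "\n".join([
        f"RUNNR   {job_id}",
        f"NSHOW   {SHOWERS_PER_JOB}",
        f"SEED    {1000 + job_id} 0 0",          # independent random seed per job
        "ERANGE  10. 100000.",                   # assumed energy range (GeV)
        f"OUTFILE mc_run_{job_id:06d}.dat",
    ])
    (outdir / f"job_{job_id:06d}.inp").write_text(steering + "\n")

print(f"wrote {TOTAL_SHOWERS // SHOWERS_PER_JOB} job descriptions to {outdir}/")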

18. Towards Array Ctrl: 'robotic'
- too complicated a system to make fully robotic
- hardware must be very(!) reliable
- software must be rather simple (no bells and whistles ...)
- limited experience; need a test environment
(A minimal safety-loop sketch follows below.)
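To illustrate the 'keep the software simple' point: a robotic control loop can be reduced to a handful of explicit safety checks with 'park' as the default action. Sensor names and limits below are hypothetical.

# Illustration only: a robotic safety loop kept deliberately simple.
# Sensor names, limits and the polling interval are hypothetical.
import time

def safe_to_observe(status: dict) -> bool:
    """Every condition must hold; anything unexpected means 'park'."""
    return (status["sun_below_horizon"]
            and status["wind_speed_ms"] < 15          # assumed wind limit
            and not status["person_inside_array"]
            and status["mains_power_ok"])

def control_loop(read_sensors, park_all_telescopes, observe):
    while True:
        if safe_to_observe(read_sensors()):
            observe()
        else:
            park_all_telescopes()                     # the default action is always "park"
        time.sleep(10)                                # assumed polling interval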

19. Towards Array Ctrl: Slow Ctrl
- centralized approach: a powerful central machine controls the individual (dumb) telescopes; it always knows everything about everything ...
- distributed approach: each 'intelligent' telescope runs independently; Central Control just distributes tasks (e.g. schedule changes) and receives status info (but can always request any information)
- mixed mode
Design study: find the optimal solution ... (a message-level sketch of the distributed approach follows below)
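A sketch of what the distributed approach could look like at the message level: each telescope runs its own control loop and exchanges only compact task and status messages with Central Control. The message format and field names are assumptions for illustration.

# Sketch of the distributed slow-control idea: compact task/status messages,
# no telescope internals exposed.  Format and fields are hypothetical.
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class Task:
    telescope_id: int
    action: str            # e.g. "track", "park"
    target: str

@dataclass
class StatusReport:
    telescope_id: int
    state: str             # e.g. "parked", "slewing", "tracking", "error"
    target: Optional[str]
    camera_temp_c: float

def serialize(msg) -> bytes:
    """Messages travel as JSON; Central Control only ever sees these summaries."""
    return json.dumps(asdict(msg)).encode()

# Central Control distributes a task and receives the telescope's status:
print(serialize(Task(telescope_id=17, action="track", target="Crab")))
print(serialize(StatusReport(telescope_id=17, state="tracking",
                             target="Crab", camera_temp_c=12.5)))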

20. Towards Array Ctrl: Trigger
- simplest approach: each telescope has its own local trigger; the multi-telescope trigger just combines the local-trigger information of next neighbours
- very ambitious approach: combine adjacent telescopes at the pixel level (technically feasible? any advantage ???)
Design study: find the optimal solution ... (a coincidence sketch of the simple approach follows below)
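The simple approach can be stated in a few lines: accept an array trigger when at least two neighbouring telescopes report local triggers within a short coincidence window. Telescope positions, the neighbour distance and the window width below are arbitrary example values.

# Sketch of the "simple" array trigger: a next-neighbour coincidence of local
# triggers.  Geometry, neighbour distance and window width are assumptions.
from itertools import combinations

positions = {1: (0, 0), 2: (120, 0), 3: (0, 120), 4: (500, 500)}   # metres
NEIGHBOUR_DIST = 200.0    # assumed maximum spacing to count as "neighbours"
COINC_WINDOW = 100e-9     # assumed coincidence window in seconds

def is_neighbour(a, b):
    (xa, ya), (xb, yb) = positions[a], positions[b]
    return ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 <= NEIGHBOUR_DIST

def array_trigger(local_triggers):
    """local_triggers: {telescope_id: trigger time in s, or None if no trigger}."""
    fired = {t: ts for t, ts in local_triggers.items() if ts is not None}
    return any(is_neighbour(a, b) and abs(fired[a] - fired[b]) <= COINC_WINDOW
               for a, b in combinations(fired, 2))

print(array_trigger({1: 0.0, 2: 40e-9, 3: None, 4: None}))   # True: two neighbours in time
print(array_trigger({1: 0.0, 4: 10e-9}))                     # False: telescopes too far apart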

21. Towards Array Ctrl: DAQ
- centralized approach: Central Control receives all raw data and writes combined 'CTA events' containing several telescopes
- distributed approach: each telescope writes its own data stream (including trigger info) to a local or central file server; combining the data of adjacent telescopes is done at the analysis stage
Design study: find the optimal solution ... (an offline event-building sketch follows below)
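For the distributed option, event building moves offline: per-telescope streams are merged by trigger timestamp at the analysis stage. The sketch below shows the idea; the record layout and matching window are assumptions.

# Sketch of offline event building for the distributed DAQ option: each
# telescope wrote its own time-ordered stream; array events are assembled by
# matching trigger timestamps.  Record layout and window are assumptions.
MATCH_WINDOW = 1e-6   # assumed timestamp tolerance in seconds

def build_array_events(streams):
    """streams: {telescope_id: sorted list of (trigger_time, payload)}."""
    # Flatten all records, sort by time, then group records that fall within
    # MATCH_WINDOW of the first record of the current group.
    records = sorted((t, tel, data)
                     for tel, stream in streams.items()
                     for t, data in stream)
    events, current = [], []
    for t, tel, data in records:
        if current and t - current[0][0] > MATCH_WINDOW:
            events.append(current)
            current = []
        current.append((t, tel, data))
    if current:
        events.append(current)
    return events

streams = {
    1: [(0.000000, "img_a"), (0.004000, "img_c")],
    2: [(0.0000003, "img_b")],
}
for ev in build_array_events(streams):
    print([tel for _, tel, _ in ev])      # -> [1, 2] then [1]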

22. Towards an Operations (center)
Probably only few CTA-specific requirements ==> need only a tools survey (and a decision).

23. Towards a Data (center)
- define needs <== experts from the Cherenkov and wider astronomy communities
- tools survey
- use HEGRA data as a playground to prove the feasibility of the approach
Possible extensions:
[midterm]: archive H.E.S.S./MAGIC data in a 'universal' format for extended tests
[longterm]: allow data mining on old H.E.S.S./MAGIC data
(very political decisions ...; an important signal to the astro community ...)
The analysis package should also be tested on old data.

24. Towards a Data (center)
It is the interface to the astro community (data mining!). Its basic design might not be crucial for us, but it should 'please' the other users ...

25. Summary
- A lot of experience exists in all needed fields
- It is important to combine and synchronize this knowledge
- Several decisions have to come out of the design studies; find 'optimal' solutions (probably no show-stoppers: a 'wrong' solution will also work, but less efficiently and more expensively)
- Some tasks will need a rather long time between design study and final product (e.g. the full analysis package); to be ready in time, these design studies must be finished rather soon so that implementation can start ... (e-center call in 2008 ???)
