1. The HERA-B database services: detector configuration, calibration, alignment, slow control, data classification
A. Amorim, Vasco Amaral, Umberto Marconi, Tome Pessegueiro, Stefan Steinbeck, Antonio Tome, Vincenzo Vagnoni and Helmut Wolters
• The HERA-B detector
• The database problem
• The Architecture
• The Berkeley-DB DBMS
• The client/server integration
• The domains and solutions
• Conclusions and Outlook

2. The HERA-B Experiment: B/B̄ tagging, B0/B̄0 → J/ψ KS
• Vertex Detector: Si strip, 12 μm resolution
• Tracking: ITR (<20 cm) MSGC-GEM; OTR (>20 cm) 5 and 10 mm drift cells
• Magnet: 2 Tm
• RICH (π/K): multianode PMTs, C4F10
• TRD (e/h): straw tubes + thin fibers
• ECAL (γ + e/h): W/Pb scintillator shashlik
• MUON (μ/h): tube, pad and gas-pixel chambers
• High-pT trigger: pad/gas-pixel chambers

3. The HERA-B detector (layout figure: RICH, SVD (not visible), ECAL, Muon stations 1, 3, 4, Magnet, TRD, OTR, ITR and OTR chambers)

4. The main challenge: Selecting

5. How do we select them?
Input rate: 10 MHz
• Pretrigger: ECAL, μ system, pT pads
• L1: e/μ "tracking" in 4 SL, pT cut, mass cut; reduction 1/200 -> 50 kHz (time scale ~10 μs)
• L2: + drift times, magnet traversal, vertexing; 1/100 -> 500 Hz (~5 ms)
• L3: full track & vertex fit, + SVD tracks, particle id; 1/10 -> 50 Hz (~200 ms)
• L4: + full reconstruction, physics selection; 1/2.5 -> 20 Hz (~4 s) -> TAPE
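
As a cross-check of the numbers above: 10 MHz x 1/200 = 50 kHz, x 1/100 = 500 Hz, x 1/10 = 50 Hz, x 1/2.5 = 20 Hz written to tape, an overall reduction of about 5 x 10^5.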

6. HERA-B DAQ (architecture diagram): Detector Front-End Electronics, FCS, ~1000 SHARC DSPs with DSP switches for event control, SLT/TLT trigger PCs (L2 farm: 240 PCs), a switch to the 4LT PCs (L4 farm: 200 PCs) and the logger PC; Internet connection.

7. The HERA-B database problem
To provide persistence services (including online-offline replication) to:
• Detector configuration: a commonly accepted schema
• Calibration and alignment: distributing information to the reconstruction and trigger farms; associating each event with the corresponding database information
• Slow control: managing updates without data redundancy
• Data set and event classification
• Online bookkeeping

8. Characterizing the context

9. Querying on time intervals
In a general DBMS one can select objects by the values of their attributes. Most of our requests, however, query on time or version (exception: the Event Tag Database, which selects on particle type, E, pT, etc.).
Query on time => Object(time) or Object(t1, t2).
Our simple database layer on top of Berkeley DB provides this directly; otherwise one has to specialize the DBMS (for example the conditions databases of BaBar and RD45).
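
A minimal sketch of how such a time query can be mapped onto a plain key-value store. It assumes (purely for illustration, this is not the actual HERA-B SDB code) one Berkeley DB B-tree per named quantity, keyed by a 32-bit big-endian start-of-validity time, so that Object(t) is a cursor positioned at the first key >= t and stepped back by one record; the function name and key layout are mine.

```cpp
// Sketch only: one Berkeley DB B-tree per named quantity, keyed by the
// big-endian start-of-validity time; the value is the serialized object.
// Function name, key layout and 32-bit times are illustrative assumptions.
#include <db.h>          // Berkeley DB C API (4.x assumed)
#include <arpa/inet.h>   // htonl / ntohl
#include <cstdlib>
#include <cstring>

// Fill 'data' with the record whose start time is the largest one <= t,
// i.e. the object version valid at time t.  The caller frees data->data.
static int fetch_valid_at(DB* db, uint32_t t, DBT* data) {
    DBC* cur = nullptr;
    int ret = db->cursor(db, nullptr, &cur, 0);
    if (ret != 0) return ret;

    uint32_t key_be = htonl(t);            // big-endian keys sort in time order
    DBT key;
    std::memset(&key, 0, sizeof key);
    std::memset(data, 0, sizeof *data);
    key.data = &key_be;
    key.size = sizeof key_be;
    data->flags = DB_DBT_MALLOC;           // returned blob survives cursor close

    // Position the cursor on the smallest key >= t ...
    ret = cur->c_get(cur, &key, data, DB_SET_RANGE);
    if (ret == 0) {
        uint32_t found;
        std::memcpy(&found, key.data, sizeof found);
        if (ntohl(found) > t) {            // ... and step back if it starts later
            std::free(data->data);
            data->data = nullptr;
            ret = cur->c_get(cur, &key, data, DB_PREV);
        }
    } else if (ret == DB_NOTFOUND) {
        ret = cur->c_get(cur, &key, data, DB_LAST);  // t is after the last update
    }
    cur->c_close(cur);
    return ret;                            // DB_NOTFOUND: nothing valid at t
}
```

An interval query Object(t1, t2) would use the same positioning at t1 followed by DB_NEXT iteration until the key exceeds t2.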

10. Keys, objects and client/server
• Key = name + version
• Machine-independent blob of DATA
• Example: /PM/ with description "field1; field2; ..."; Db: /RICH/HV/ with versioned values .2 .5 -.1 56892 ...
• Client/server at the SDB level + RPM, a UDP-based communication package.
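
A sketch of what a "name + version" key and a machine-independent payload could look like in practice. The "#" separator, the helper names and the fixed network-byte-order float encoding are illustrative assumptions, not the actual SDB format.

```cpp
// Illustrative only: compose a key like "/RICH/HV/" plus a version number,
// and pack float parameters in network byte order so that machines of any
// endianness read the same blob.
#include <arpa/inet.h>   // htonl
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

static std::string make_key(const std::string& name, uint32_t version) {
    return name + "#" + std::to_string(version);   // e.g. "/RICH/HV/#42"
}

// Pack a vector of floats into a byte-order-independent blob.
static std::vector<unsigned char> pack(const std::vector<float>& values) {
    std::vector<unsigned char> blob;
    blob.reserve(values.size() * 4);
    for (float v : values) {
        uint32_t bits;
        std::memcpy(&bits, &v, sizeof bits);   // reinterpret the IEEE-754 bits
        bits = htonl(bits);                    // fixed (network) byte order
        const unsigned char* p = reinterpret_cast<const unsigned char*>(&bits);
        blob.insert(blob.end(), p, p + 4);
    }
    return blob;
}
```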

11. The Berkeley DB (see http://www.sleepycat.com/)
• Embedded transactional store with logging, locking, commit and rollback, disaster recovery.
• Intended for high-concurrency read-write workloads, transactions and recoverability.
• Cursors to speed up access from many clients.
• Open Source policy: the license is free for non-commercial purposes; rather nice support.
• No client/server support is provided.
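
A short sketch of the transactional features listed above, using the Berkeley DB 4.x C API (older 3.x releases have a slightly different DB->open signature); the directory, file and key names are illustrative and error checks are omitted for brevity.

```cpp
// Minimal transactional Berkeley DB usage (4.x C API, called from C++).
// The ./dbhome directory must already exist; error checks omitted.
#include <db.h>
#include <cstring>

int main() {
    DB_ENV* env = nullptr;
    db_env_create(&env, 0);
    // Environment with locking, logging, transactions and recovery.
    env->open(env, "./dbhome",
              DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK |
              DB_INIT_LOG | DB_INIT_TXN | DB_RECOVER, 0);

    DB* db = nullptr;
    db_create(&db, env, 0);

    DB_TXN* txn = nullptr;
    env->txn_begin(env, nullptr, &txn, 0);
    db->open(db, txn, "calib.db", nullptr, DB_BTREE, DB_CREATE, 0664);

    const char keystr[] = "/RICH/HV/#42";
    float values[] = {0.2f, 0.5f, -0.1f};
    DBT key, data;
    std::memset(&key, 0, sizeof key);
    std::memset(&data, 0, sizeof data);
    key.data = (void*)keystr; key.size = sizeof keystr;
    data.data = values;       data.size = sizeof values;
    db->put(db, txn, &key, &data, 0);      // write under the transaction

    txn->commit(txn, 0);   // logged; roll back instead with txn->abort(txn)
    db->close(db, 0);
    env->close(env, 0);
    return 0;
}
```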

12. Slow Control Interface (diagram: a metadata object, a data object and successive update objects for channels such as Pmt1000, Pmt1003, Pmt2000 as their values change in time; optimized queries)
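
One plausible reading of the data-object / update-object scheme, sketched with hypothetical types (the real metadata layout is not shown on the slide): a data object holds a full snapshot of the channel values, while an update object carries only the channels that changed, so repeated updates do not duplicate the unchanged parameters.

```cpp
// Hypothetical illustration of "data object + update object" storage for
// slow-control channels; names and layout are not the HERA-B schema.
#include <cstdint>
#include <map>
#include <string>

struct DataObject {                          // full snapshot taken at 'time'
    uint32_t time;
    std::map<std::string, float> channels;   // e.g. {"Pmt1000", 1.2f}, ...
};

struct UpdateObject {                        // only what changed since then
    uint32_t time;
    std::map<std::string, float> changed;
};

// Reconstruct the channel values valid at query time 't': start from the
// last snapshot before 't' and apply the (time-ordered) updates up to 't'.
static DataObject value_at(const DataObject& snapshot,
                           const std::map<uint32_t, UpdateObject>& updates,
                           uint32_t t) {
    DataObject result = snapshot;
    for (const auto& [utime, upd] : updates) {
        if (utime > t) break;                // map is ordered by update time
        for (const auto& [name, value] : upd.changed)
            result.channels[name] = value;
        result.time = utime;
    }
    return result;
}
```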

13. Associations to Events (diagram): index objects, referenced by the events and created in the active servers, are dynamically associated with the database objects through the active server interface (client/server); revision 0 = online, revision 1 = offline calibration.
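
A hedged sketch of what such an event-to-database association could look like, with hypothetical types: the slide only states that events reference index objects, which in turn reference the database objects, and that a revision number distinguishes e.g. online from offline calibration.

```cpp
// Hypothetical sketch: an event stores only a compact reference to an index
// object; the index object lists the database keys (name + version) valid
// for that event, for a given revision (0 = online, 1 = offline, ...).
#include <cstdint>
#include <string>
#include <vector>

struct DbRef {
    std::string table;      // e.g. "/RICH/HV/"
    uint32_t    version;    // version of the object in that table
};

struct IndexObject {
    uint32_t           id;        // referenced by the events
    uint32_t           revision;  // 0 = online, 1 = offline calibration, ...
    std::vector<DbRef> refs;      // database objects valid for these events
};

// An event record then only needs the pair (index id, revision); the active
// database server resolves it to the actual calibration objects.
struct EventHeader {
    uint64_t event_number;
    uint32_t index_id;
    uint32_t revision;
};
```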

14. The index API design
Index objects can also be associated with transaction objects, which are not data themselves but refer to sets of data objects that must be considered together. Tools were also developed.

15. The Parameter Distribution (cont.)

16. (figure only; no transcribed text)

17. Basic n-n associations (LEDA)
• Associations are navigated with iterators.
• Using hash tables.
• Keys as OIDs with the scope of classes.
• Explicitly loaded or saved (as containers).
(Diagram: LEDA Object Manager with hash-table-implemented associations; key objects, referenced by events, reached through the active server interface, client/server.)
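
As a rough illustration of n-n associations stored in hash tables and navigated with iterators; standard containers stand in here for the LEDA object manager, whose API is not shown on the slide, and the class and member names are mine.

```cpp
// Rough sketch of an n-n association between two classes of objects,
// identified by keys (OIDs scoped to the class), kept in hash tables.
#include <cstdint>
#include <unordered_map>
#include <vector>

using Oid = uint32_t;

class Association {
public:
    void link(Oid a, Oid b) {                 // add one a<->b relation
        a_to_b_[a].push_back(b);
        b_to_a_[b].push_back(a);
    }
    // Navigate from one side: all Bs associated with a given A.
    const std::vector<Oid>& of_a(Oid a) const {
        static const std::vector<Oid> empty;
        auto it = a_to_b_.find(a);
        return it != a_to_b_.end() ? it->second : empty;
    }
private:
    std::unordered_map<Oid, std::vector<Oid>> a_to_b_;
    std::unordered_map<Oid, std::vector<Oid>> b_to_a_;
};
```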

18. GUI for editing and drawing
• From R&D: Java, Tcl/Tk, gtk; reusing and extending widgets.
• Data hidden from Tcl/Tk.
• ROOT and the database bound through a client/server socket.

19. General Architecture (10^9 Evt./y)

20. The Cache Server (diagram): user clients talk to cache servers holding the data in memory; the caches forward requests to the DB servers, and a TCP/IP gateway connects caches across the WAN.
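
A sketch of the cache-server idea with hypothetical interfaces (the real servers speak the UDP-based protocol mentioned earlier): a request is served from the local cache when possible and only forwarded to the upstream database server on a miss, so a tree of such caches shields the central server from the farm nodes.

```cpp
// Hypothetical cache-server lookup: serve from memory if possible, otherwise
// ask the upstream database server and remember the answer.
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

using Blob = std::vector<unsigned char>;

class CacheServer {
public:
    // 'upstream' is whatever fetches the object from the next server in the
    // tree (or from the Berkeley DB files on the final db-server).
    explicit CacheServer(std::function<Blob(const std::string&)> upstream)
        : upstream_(std::move(upstream)) {}

    const Blob& get(const std::string& key) {
        auto it = cache_.find(key);
        if (it == cache_.end())                       // miss: forward upstream
            it = cache_.emplace(key, upstream_(key)).first;
        return it->second;                            // hit: served locally
    }
private:
    std::function<Blob(const std::string&)> upstream_;
    std::unordered_map<std::string, Blob> cache_;
};
```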

21. The Replication Mechanism (diagram): the online DB servers write incremental dump files, which cross the firewall, are imported by the offline DB servers and are also sent to tape.
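
A very rough sketch of the incremental-dump idea; the helpers, the in-memory record lists and the monotonically growing record id are illustrative assumptions, since the actual file format is not described on the slide.

```cpp
// Hypothetical incremental dump/replay loop.
#include <cstdint>
#include <string>
#include <vector>

struct Record { uint64_t id; std::string key; std::vector<unsigned char> blob; };

// Online side: copy every record with id > last_dumped into a dump file,
// then remember the new high-water mark for the next increment.
uint64_t dump_increment(const std::vector<Record>& all, uint64_t last_dumped,
                        std::vector<Record>& dump_file) {
    uint64_t high = last_dumped;
    for (const Record& r : all)
        if (r.id > last_dumped) {
            dump_file.push_back(r);
            if (r.id > high) high = r.id;
        }
    return high;
}

// Offline side: replaying the dump files in order reproduces the online
// database; the same files double as incremental backups for tape.
void replay(std::vector<Record>& offline_db, const std::vector<Record>& dump_file) {
    offline_db.insert(offline_db.end(), dump_file.begin(), dump_file.end());
}
```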

22. DAQ configuration

23. DAQ (Software Components)

24. VDS databases

25. Maintaining the system
• A slow control process permanently checks the state of the database servers.
• It issues alarms for the detector shift crew.
• Tools to start and stop the dynamic configuration of database servers are to be used by a set of experts.
• The configuration and startup of the distributed database server system is performed using a special configuration database for this system.
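
A minimal sketch of the kind of watchdog described above; the check and alarm callbacks are hypothetical placeholders for the real slow-control integration.

```cpp
// Hypothetical watchdog loop: poll each configured database server and raise
// an alarm for the shift crew when one stops answering.
#include <chrono>
#include <functional>
#include <string>
#include <thread>
#include <vector>

void watch(const std::vector<std::string>& servers,
           const std::function<bool(const std::string&)>& is_alive,
           const std::function<void(const std::string&)>& raise_alarm) {
    for (;;) {
        for (const std::string& s : servers)
            if (!is_alive(s))          // e.g. a ping over the UDP protocol
                raise_alarm(s);        // e.g. message to the shift crew display
        std::this_thread::sleep_for(std::chrono::seconds(30));
    }
}
```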

26. Conclusions
ONLINE:
• Large number of clients => gigabytes per update
• broadcast simultaneously to the SLT
• tree of cache database servers for the 4LT
• Correlates (dynamically) each event with the database objects
• 600 k SLC parameters using data and update objects
• parameter history is re-clustered on the database servers
• The online database system has been successfully commissioned
OFFLINE:
• Replication mechanism decouples online from offline
• also provides incremental backup of the data
• TCP/IP gateways and proxies
• "Data warehousing" for data-set classification -> MySQL
• Relation to the event tag under evaluation
• Also providing persistency to ROOT objects
• Using Open Source external packages has been extremely useful.

27. Future directions ...
One must have a plan, even if it is a wrong one ...
(Diagram: clients accessing Berkeley DB and MySQL through a Persistent State Service (PSS) over CORBA (ORBacus), using its Open Communications Interface (OCI); IIOP and a UDP-based transport for the farms and the WAN.)
