
Performance of the DCS configuration database



  1. Performance of the DCS configuration database

  2. Organization of the Configuration Database
  • The ALICE running mode (e.g. lead-lead, cosmics, p-p) is defined by the ECS
  • The mode is sent to the DCS at every change
  • A valid configuration must exist for every running mode
  • The same configuration may be re-used for several modes
  • FERO configuration: lookup tables define the relationship between the ALICE running mode and the corresponding configuration version for each sub-detector (see the lookup sketch after the next slide)

  3. Hierarchical organization of DCS FERO configuration data
  [Diagram: a "Type of Run" record points to one version entry per sub-detector (SPD Version, SDD Version, SSD Version, …, TRD Version), and each version entry points to the corresponding detector data (SPD Data, SDD Data, SSD Data, …, TRD Data)]

  Default Versions for Detector X:

      Type of Run      Detector version
      (number(4,0))    (number(7,0))
      -------------    ----------------
      p-p              xxxxx
      cosmics          xxxxx
      lead-lead        xxxxx
      …                xxxxx
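  To make the lookup concrete, here is a minimal sketch of how a FED server might fetch the configuration version for the current running mode through OCCI. The table, column and connection names (config_lookup, run_type, detector_version, "dcsdb") are illustrative assumptions, not the actual ALICE schema:

      // Hypothetical lookup of the configuration version for a running mode.
      // All table, column and connection names here are invented for the sketch.
      #include <occi.h>
      #include <iostream>
      using namespace oracle::occi;

      int main() {
          Environment* env = Environment::createEnvironment();
          Connection* conn = env->createConnection("user", "passwd", "dcsdb");

          Statement* stmt = conn->createStatement(
              "SELECT detector_version FROM config_lookup WHERE run_type = :1");
          stmt->setInt(1, 2);                   // e.g. a run-type code (hypothetical)
          ResultSet* rs = stmt->executeQuery();
          if (rs->next())                       // one version per running mode
              std::cout << "FERO version " << rs->getInt(1) << std::endl;

          stmt->closeResultSet(rs);
          conn->terminateStatement(stmt);
          env->terminateConnection(conn);
          Environment::terminateEnvironment(env);
          return 0;
      }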

  4. Separate tables are created for each detector
  • Data organization is hierarchical and follows the intrinsic detector segmentation
  • The “atomic” data records represent a group of parameters which should be loaded together (e.g. chip settings, sector settings)
  • New configuration data for any of the detector components results in a new configuration set for the whole detector
  • The hierarchical organization allows re-use of information already stored in the database

  5. Configuration DB hierarchical organization

     Configuration Version
      ├─ Group 1 Version
      │   ├─ Sub-group 1 Version → DATA
      │   ├─ …
      │   └─ Sub-group n Version → DATA
      └─ Group n Version
          ├─ Sub-group 1 Version → DATA
          ├─ …
          └─ Sub-group n Version → DATA

  6. Configuration DB example: SPD

     Configuration Version
      ├─ Side A Version
      │   ├─ Sector 0 Version
      │   │   ├─ Half-Stave 0 Version → DATA
      │   │   ├─ …
      │   │   └─ Half-Stave 5 Version → DATA
      │   ├─ …
      │   └─ Sector 9 Version
      └─ Side C Version
          └─ …
     (new configuration data at any level, e.g. one half-stave, yields new versions only along the path up to the top-level configuration; sketched below)
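  A hypothetical sketch of that version re-use when new half-stave data is published: only the path from the changed leaf to the root receives new version numbers, while sibling branches keep theirs. All table, column and sequence names (spd_halfstave, spd_hs_seq, …) are invented here; each detector defines its own schema:

      // Publish new data for one half-stave; unchanged branches keep their
      // existing versions. Names below are illustrative only.
      #include <occi.h>
      using namespace oracle::occi;

      void publishHalfStave(Connection* conn, int sector, int halfStave,
                            unsigned char* data, unsigned int len) {
          // 1. insert the new "atomic" half-stave record under a new version
          Statement* st = conn->createStatement(
              "INSERT INTO spd_halfstave (version, sector, half_stave, data) "
              "VALUES (spd_hs_seq.NEXTVAL, :1, :2, :3)");
          st->setInt(1, sector);
          st->setInt(2, halfStave);
          st->setBytes(3, Bytes(data, len));
          st->executeUpdate();
          conn->terminateStatement(st);

          // 2. create a new sector version that points to the new half-stave
          //    version while re-using the unchanged half-stave versions, then
          //    repeat one level up for the side and for the top-level
          //    configuration version (one INSERT ... SELECT per level, omitted).
          conn->commit();
      }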

  7. The data record will be optimized for each sub-detector
  • Two options for writing the data (sketched below):
    – a table containing individual parameters
    – configuration data stored in a BLOB
  • It is the responsibility of the detector groups to define the database schema and implement the client code in the FED server(s)
  • The DCS team will:
    – provide assistance (we can help you create the schema and implement the client code)
    – verify the schema (in collaboration with IT)
    – install, operate and maintain the database
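  The two options might look as follows in Oracle DDL, issued here through OCCI. Table and column names are examples only; as stated above, the real schema is defined by each detector group together with the DCS team:

      // The two storage options from the slide, as illustrative Oracle DDL.
      #include <occi.h>
      using namespace oracle::occi;

      void createExampleTables(Connection* conn) {
          Statement* st = conn->createStatement();

          // Option 1: individual parameters as columns, one row per chip
          st->setSQL("CREATE TABLE chip_params ("
                     " version NUMBER(7,0),"
                     " chip_id NUMBER(4,0),"
                     " dac0    NUMBER(3,0),"   // ... one column per DAC ...
                     " dac43   NUMBER(3,0))");
          st->executeUpdate();

          // Option 2: the whole record packed into a single BLOB
          st->setSQL("CREATE TABLE chip_config ("
                     " version NUMBER(7,0),"
                     " payload BLOB)");
          st->executeUpdate();

          conn->terminateStatement(st);
      }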

  8. Results of Oracle server tests
  • Configuration data download:
    – BLOBs
      – big
      – small

  9. Data Download from Oracle Server (BLOBs)
  • Large BLOBs were stored on the DB server connected to the private DCS network (1 Gbit/s server connection, 100 Mbit/s client connection)
  • The retrieval rate of 150 MB configuration BLOBs by 3 concurrent clients was measured to be 3–11 MB/s per client
  • The upper limit of 11 MB/s corresponds to the client network connection
  • Results depend on the Oracle cache status: the first retrieval is slower, subsequent accesses are faster
  • Depending on detector access patterns, performance can be optimized by tuning the server’s cache
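  A minimal sketch of the measured operation, fetching one configuration BLOB; the chip_config table is the hypothetical one from the earlier schema example:

      // Fetch one configuration BLOB into memory.
      #include <occi.h>
      #include <vector>
      using namespace oracle::occi;

      std::vector<unsigned char> fetchConfig(Connection* conn, int version) {
          Statement* st = conn->createStatement(
              "SELECT payload FROM chip_config WHERE version = :1");
          st->setInt(1, version);
          ResultSet* rs = st->executeQuery();

          std::vector<unsigned char> buf;
          if (rs->next()) {
              Blob blob = rs->getBlob(1);
              blob.open(OCCI_LOB_READONLY);
              unsigned int len = blob.length();
              if (len > 0) {
                  buf.resize(len);
                  blob.read(len, &buf[0], len, 1);  // LOB offsets are 1-based
              }
              blob.close();
          }
          st->closeResultSet(rs);
          conn->terminateStatement(st);
          return buf;
      }

  The “Blob, stream” rows in the appendix presumably refer to reading through the LOB stream interface (e.g. Blob::getStream()) rather than a single read() call as above.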

  10. Test With Small BLOBs
  • Each test record consists of 10 BLOBs of 10 kB each
  • 260 configuration records were retrieved per test
  • The allocation of BLOBs to configuration records was varied:
    – from random
    – to shared (certain BLOBs were re-used between configuration records) → to test the Oracle caching mechanism

  11. BLOB Retrieval Tests

  12. AMANDA
  • AMANDA is a PVSS-II manager which uses the PVSS API to access the PVSS archives
  • Developed in collaboration with the offline team
  • The archive architecture (files/RDB) is transparent to AMANDA
  • AMANDA can be used as an interface between the PVSS archive and non-PVSS clients
  • AMANDA returns data for a requested time period

  13. AMANDA status
  • AMANDA version 4.0 was released last week
  • Preliminary tests look promising (stable and smooth operation)
  • Currently being tested thoroughly by the offline team
  • The tool is ready for pre-installation

  14. Thank you for your attention

  15. Appendix

  16. Data Insertion Rate to the DB Server
  • Data was inserted into a 2-column table (number(38), varchar2(128), no index)
  • The following results were obtained (inserting 10⁷ rows into the DB):
    – OCCI autocommit: ~500 values/s
    – PL/SQL (bind variables): ~10 000 values/s
    – PL/SQL (varrays), see the array-insert sketch below:
      > 73 000 rows/s (1 client)
      > 42 000 rows/s per client (2 concurrent clients)
      > 25 000 rows/s per client (3 concurrent clients)
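  The fastest numbers above were obtained with PL/SQL varrays; a closely related bulk technique available directly in OCCI is array execution via iterations, sketched here. The perf_test table name is assumed, with the 2-column layout quoted above:

      // Batched (array) insert with OCCI iterations: many rows, one round trip.
      #include <occi.h>
      using namespace oracle::occi;

      void bulkInsert(Connection* conn, int nRows) {
          Statement* st = conn->createStatement(
              "INSERT INTO perf_test (id, val) VALUES (:1, :2)");
          st->setMaxIterations(nRows);       // size of the batch
          st->setMaxParamSize(2, 128);       // needed for the varchar2(128) bind

          for (int i = 0; i < nRows; ++i) {
              st->setInt(1, i);
              st->setString(2, "payload");
              if (i < nRows - 1)
                  st->addIteration();        // queue this row, keep batching
          }
          st->executeUpdate();               // executes the whole batch
          conn->commit();
          conn->terminateStatement(st);
      }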

  17. Data Download From Tables
  • The FERO data of one SPD readout chip consists of:
    – 44 DAC settings per chip
    – a mask matrix for 8192 pixels per chip
  • In total 1200 chips are used in the SPD
  • 15×120 front-end registers
  • The full SPD configuration can be loaded within ~3 s (a rough size estimate follows)
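  A back-of-envelope total from these numbers, assuming 1 byte per DAC setting, 1 bit per pixel mask and 1 byte per front-end register (the real record layout is detector-defined), puts a full SPD configuration at roughly 1.3 MB:

      // Rough size of one full SPD configuration from the slide's numbers.
      #include <cstdio>

      int main() {
          const long chips     = 1200;
          const long dacBytes  = chips * 44;        //    52,800 bytes of DACs
          const long maskBytes = chips * 8192 / 8;  // 1,228,800 bytes of masks
          const long regBytes  = 15 * 120;          //     1,800 bytes of registers
          std::printf("total ~%.1f MB\n",
                      (dacBytes + maskBytes + regBytes) / 1.0e6);  // ~1.3 MB
          return 0;
      }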

  18. Operation of AMANDA

  19. AMANDA in a distributed system
  • A PVSS system can directly access only the file-based data archives stored by its own data manager (DM)
  • In a distributed system, data produced by other PVSS systems can also be accessed if a connection via the DIST manager exists
  • In the case of file-based archiving, the DM of the remote system is always involved in the data transfer
  • In the case of RDB archiving, the DM can retrieve any data produced by other PVSS systems within the same distributed system without involving their DMs
  • It is foreseen that each PVSS system will run its own AMANDA
  • There will be at least one AMANDA per detector
  • Using more AMANDA servers overcomes some API limitations – some requests can be parallelized (clients need to know which server has direct access to the requested data)

  20. AMANDA architecture
  [Diagram: an AMANDA Client connects to the AMANDA Server, which runs as a manager inside the PVSS-II system alongside the standard managers (User Interface, Control, API, Archive, Database, Event, Drivers) and serves data from the PVSS-II Archive(s)]

  21. AMANDA in the distributed environment (archiving to files)
  [Diagram: three PVSS-II systems, each running its own UI, CTR, API, DM, EM, AMD (AMANDA) and DRV managers, interconnected through their DIS (distribution) managers; with file-based archiving each AMANDA reads the local archive through the DM of its own system]

  22. AMANDA in the distributed environment (archiving to ORACLE)
  [Diagram: the same three PVSS-II systems, but all DMs archive into a common ORACLE database, so an AMANDA server can retrieve data from any system directly from ORACLE]

  23. Results of Oracle server tests (data download)
  • 150 MB of configuration data, 3 concurrent clients, private DCS 100 Mbit/s network, 1 Gbit switch:
    – BFILE           ~10.59 MB/s
    – BLOB            ~10.40 MB/s
    – BLOB, stream    ~10.93 MB/s
    – BLOB, ADO.NET   ~10.10 MB/s
  • 150 MB of configuration data, 1 client, CERN 10 Mbit/s network:
    – BFILE           ~0.81 MB/s
    – BLOB            ~0.78 MB/s
    – BLOB, stream    ~0.81 MB/s
    – BLOB, ADO.NET   ~0.77 MB/s
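  At ~10.9 MB/s the fastest of these transfers approaches the theoretical ceiling of the 100 Mbit/s client link (≈12.5 MB/s), matching the ~11 MB/s limit quoted on slide 9; at these rates a 150 MB configuration set downloads in roughly 14 s per client.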
