
IRMIS Overview






Presentation Transcript


  1. IRMIS Overview Don Dohan EPICS Collaboration Meeting BNL Oct 15, 2010

  2. General Comments: • IRMIS collaboration workshops/meetings (APS): • very useful to me to learn from other labs’ experience and requirements/use cases. • both ways - some of the IRMIS concepts have shown up in other laboratory RDBs. I guess this is the point of these meetings. • Question: can we do more than just cross-fertilize “ideas”? • “I like your idea – can I get your schema and code (and will it work at my place)?” • Some success: the IRMIS PV schema and crawler have been deployed at a large number of laboratories, with crawler contributions from several of them. A collaboration success. • The distinction between the IRMIS database schema (ERD) and the application layer is deliberately blurred in this talk.

  3. Ben Franksen: Calling an iocsh "sub-script” thread on tech-talk, Oct. 13, 2010: “Reverse engineering installed files is a simple and clever solution to the practical problem of finding out what is actually installed. But *of course* it is liable to break whenever the *structure* of the installed stuff changes. The obvious solution to the dilemma is to create a central service that serves as source to both the installed files *and* the big picture (e.g. Irmis) and presumably other stuff like alarm handlers etc. But how can we design something like that while still retaining the easy editability (and the ability to use version control systems) that we have with plain text files? This is the million dollar question we have been discussing at BESSY ever since it became clear that a growing number of Oracle database tables (with a specific set of tables for each kind of device/application) is *not* an easily editable and maintainable solution in the long run.”

  4. This is the point where we get 80% coverage for 20% of the effort. We only have the 20% budget. How do we pick our 80%? Effort beyond this region is not justified: a deliberate cut-off in coverage. Cost is not only in data entry – there is a maintenance cost in data validation and test. Without this validation, the RDB loses its credibility as the ‘master store’ of critical operational data. Approach: domain separation – simplifies application development, allows graded validation requirements. Approach: key/value support for ‘special cases’.
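The key/value support for ‘special cases’ mentioned above can be sketched as a generic component-property table; the table and property names below are illustrative, not the actual IRMIS ERD:

```python
import sqlite3

# Minimal sketch, assuming a generic key/value property table alongside the
# component table (names invented for illustration, not the IRMIS schema).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cmpnt (cmpnt_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE cmpnt_prop (
    cmpnt_id   INTEGER REFERENCES cmpnt(cmpnt_id),
    prop_key   TEXT,
    prop_value TEXT,
    PRIMARY KEY (cmpnt_id, prop_key)
);
""")
conn.execute("INSERT INTO cmpnt VALUES (1, 'QF1G2C01A')")
# 'Special case' attributes go in as rows, not as new columns or new tables:
conn.execute("INSERT INTO cmpnt_prop VALUES (1, 'effective_length_m', '0.25')")
conn.execute("INSERT INTO cmpnt_prop VALUES (1, 'serial_no', 'SN-0042')")

props = dict(conn.execute(
    "SELECT prop_key, prop_value FROM cmpnt_prop WHERE cmpnt_id = 1"))
print(props["serial_no"])   # SN-0042
```

The trade-off this buys is exactly the graded validation above: key/value rows are cheap to add but carry weaker typing and validation than dedicated columns.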

  5. IRMIS Domain/Coverage: • Software: • EPICS software databases • global instantiated view, channel access clients (ALH, SNL, SDDS, MEDM, EDM…) • EPICS configuration history • Hardware • Component Assembly/configuration management • Cabling • Component inventory, component history • Magnetic field measurements • Lattice • Logscore • general service configuration • PV Meta • E-log

  6. PERL crawler • PERL: • epics config software • CA clients • history

  7. PERL crawler • PERL: • epics config software • CA clients • history • DSL: • component types • cmpnt config (installation) • cables, tray partitions

  8. IRMIS Hardware Domain: Components • What is a (hardware) component? • original motivation: something that has EPICS device support • this did not address the vast number of infrastructure components (crates, racks, cpus..) • successive partitioning of the facility → arrive at ‘replaceable unit’ • IO card, chassis, magnet, rack, power supply, COTS.. • familiar day-to-day items (good ‘buy-in’) • system partitioning promotes more complete coverage • more primitive granularity than a ‘device’, which may contain many cmpnts • do not assign a high level physics ‘role’ to a component • less subjective – no naming convention issue • a component definition is influenced by how it is assembled, as well as how it functions • a component may be a ‘soft’ entity: link, frame, sequence, softIOC, etc • how do we capture the relationships between the components that make up a system?
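The closing question – how to capture the relationships between the components that make up a system – is addressed in IRMIS with parent/child ‘housed-in’ links. A minimal sketch under that assumption (class and type names invented for illustration, not the IRMIS schema):

```python
from dataclasses import dataclass, field

# Illustrative sketch of the 'replaceable unit' idea: a component carries no
# high-level physics role, only a type and the components housed inside it.
@dataclass
class Component:
    name: str
    cmpnt_type: str              # e.g. 'rack', 'vme_crate', 'io_card'
    children: list = field(default_factory=list)

    def house(self, child):
        """Record that `child` is physically housed in this component."""
        self.children.append(child)
        return child

rack = Component("RK-SR1", "rack")
crate = rack.house(Component("CR-01", "vme_crate"))
crate.house(Component("ADC-01", "io_card"))

def walk(c, depth=0):
    """Yield (depth, name) for a component and everything housed in it."""
    yield depth, c.name
    for child in c.children:
        yield from walk(child, depth + 1)

for depth, name in walk(rack):
    print("  " * depth + name)
```

In the relational version this is simply a self-referential foreign key on the component-installation table; the nesting above is what the successive partitioning of the facility produces.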

  9. [Figure (Weiming): corrector magnet CL1G2C01A and its constituent elements CXL1G2C01A and CYL1G2C01A]

  10. IRMIS component assemblies. Define IRMIS component types that describe the internal contents of a magnet assembly. These sub-components (Int_Corr_X 29.38 CFXL1G1C01A, Int_Corr_Y 29.38 CFYL1G1C01A) are ‘housed’ in the parent FLG1C01A. This allows the explicit x and y PVs to point to a discrete vertical/horizontal component. The parent assembly is what you purchase and install; the internal entities are what you model, cable and control. Allows for separate effective location and effective length of the internal entities. Implications on the Weiming Guo file? Naming convention?

  11. IRMIS2 -> IRMIS3. Inventory domain: relaxed validation. Install domain: operations-critical, strict validation.

  12. IRMIS Cabling: strongly leverages the ‘install’ domain

  13. New NSLS2 IRMIS Requirements. Recent flood of IRMIS requirements – mostly from accelerator physics; requirements not formalized. They just keep rolling in, and expand immediately after deployment → scope creep. No physics results storage requirements (yet). Development window is rapidly disappearing – components are beginning to arrive, and are being tested (on-site and at vendor sites) → need a rapid prototyping environment.

  14. PERL crawler / Rapid Prototype • RPE: • lattice • cmpnt inventory • cmpnt history • magnet measurements • traveler • elog • pv_meta • service config (alh, ar, score) • logscore • cmpnt channels • PERL: • epics config software • CA clients • history • DSL: • component types • cmpnt config (installation) • cables, tray partitions

  15. IRMIS3 Rapid Prototyping Environment [diagram: Browser App, PERL IOC Crawler, PERL DBI, IRMIS3, Rapid Proto, HLA Python] Python IRMIS Access Library: • server-side Python for web applications • same API for physics applications • provides the basis requirements documentation set for porting to the Data Services Layer (Phase Two)

  16. Magnet Measurement Workflow. A. Jain analysis: raw measurements → selection criteria for best measurement set (choice of transfer functions, etc) → off-line storage: URL. Recent request for on-line storage: do analysis directly from IRMIS; provide linear profiles, study magnet edge effects → requirements creep. XFER_function, an, bn, etc (for a given P/S strength): fit vs current → IRMIS:mag_meas; Kn vs I → IRMIS:cmpnt_prop.
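The ‘fit vs current’ step can be illustrated with a plain least-squares linear fit of integrated strength K against power-supply current I. This is only a sketch of the data flow from measurement set to stored transfer function; real magnet transfer functions need higher-order terms and hysteresis handling, and the numbers below are synthetic:

```python
# Closed-form least-squares linear fit: K = slope * I + intercept.
def fit_linear(currents, strengths):
    n = len(currents)
    sx = sum(currents)
    sy = sum(strengths)
    sxx = sum(i * i for i in currents)
    sxy = sum(i * k for i, k in zip(currents, strengths))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# Synthetic measurement set: K = 0.01 * I + 0.001
I = [10.0, 20.0, 30.0, 40.0]
K = [0.101, 0.201, 0.301, 0.401]
slope, intercept = fit_linear(I, K)
print(round(slope, 4), round(intercept, 4))   # 0.01 0.001
```

The fitted coefficients are what would land in IRMIS:mag_meas, with the derived Kn-vs-I conversion stored against the component in IRMIS:cmpnt_prop.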

  17. Magnet measurement extension to cmpnt schema Unit Conversion algorithm stored as cmpnt property
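Slide 19 elaborates that these unit conversions are captured as Python eval expressions stored as ‘cmpnt properties’. A hedged sketch of that mechanism – the property names and the expression itself are invented for illustration:

```python
# Hypothetical component-property store; in IRMIS the expression text would
# come from the cmpnt property tables, not a hard-coded dict.
cmpnt_props = {
    "QF1G2C01A": {
        # integrated strength from power-supply current I (illustrative fit)
        "i2k": "0.0123 * I + 0.0004",
        "k2i": "(K - 0.0004) / 0.0123",
    }
}

def current_to_strength(cmpnt, I):
    # eval of RDB-stored text is the mechanism the slide describes; a real
    # service would further restrict the evaluation namespace for safety.
    return eval(cmpnt_props[cmpnt]["i2k"], {"__builtins__": {}}, {"I": I})

print(round(current_to_strength("QF1G2C01A", 100.0), 4))   # 1.2304
```

Storing the conversion as text keeps the schema generic (one mechanism for every magnet family) at the cost of having to trust and validate the stored expressions.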

  18. Logscore • Python API developed during H. Sako’s visit from J-PARC. • Python code to store Logscore data in IRMIS – tested. • Logscore app configured from application config tables* • To be used in the general accelerator physics Python API to retrieve previous working-point lattice field strengths. • Under development – convert model strengths <-> stored SetPoint values using the Unit Conversions Service. • * J. Rock (SLAC): capture of service config files (alh, ar, etc)

  19. Lattice RDB Support – Work in Progress • The IRMIS lattice API provides the accelerator physicist with the ability to generate a (nested set of) element sequence(s) for use in high level physics applications • The geometry (alignment, effective length, distance from the injection point) is obtained from the ‘install’ database. • Magnet strengths are obtained from the ‘logscore’ tables or directly from the machine (using the same config as logscore) • Magnet transfer functions are obtained from the magnet measurement database (part of the ‘cmpnt’ inventory database). • The element table defines logical components (MARK, DRIFT, …) that are not ‘installed’ components. • Unit conversions (magnet strength <--> setpoint settings) are captured as python eval expressions, stored as ‘cmpnt properties’ • Generators for MAD, Elegant, Tracy, etc. input decks directly from IRMIS have been developed. Needs update for recent schema refactoring.
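The deck generators mentioned above can be pictured as a serializer from the ordered element sequence the lattice API returns to MAD-style element definitions. The element dicts and field names below are invented for illustration, not the IRMIS lattice schema:

```python
# Hypothetical element sequence, as the lattice API might assemble it from
# the 'install', 'logscore' and magnet-measurement tables.
elements = [
    {"name": "DR01", "type": "DRIFT", "length": 0.5},
    {"name": "QF1",  "type": "QUADRUPOLE", "length": 0.25, "k1": 1.2},
    {"name": "DR02", "type": "DRIFT", "length": 0.5},
]

def to_mad(elems):
    """Emit MAD-style element definitions plus a LINE for the sequence."""
    lines = []
    for e in elems:
        attrs = [f"L={e['length']}"]
        if "k1" in e:
            attrs.append(f"K1={e['k1']}")
        lines.append(f"{e['name']}: {e['type']}, " + ", ".join(attrs) + ";")
    lines.append("RING: LINE=(" + ", ".join(e["name"] for e in elems) + ");")
    return "\n".join(lines)

print(to_mad(elements))
```

Per-code generators (Elegant, Tracy, …) would differ only in the serialization step; the element sequence itself comes from the same IRMIS tables.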

  20. PV_Meta – Work in Progress • The PV_Meta schema is where Process Variables may be documented more fully than in the DTYP and DESC fields provided in EPICS. • The main thrust of the schema is to provide the connection between hardware and software – an association table between the PV and the hardware of interest to the PV (including the infrastructure where required). This is the elusive “cloud” project at APS. • PV relationships are captured – e.g. SetPoint/ReadBack pairs • PV aliases are captured • A Python script to populate PV_Meta from the install database (Weiming Guo file) and the PV list from G. Shen’s Virtual IOC has been written. • GUI under construction
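The SetPoint/ReadBack pairing can be pictured as a symmetric association keyed by PV name, carrying the install id of the hardware the PV points at. The PV names, keys and values below are invented for illustration, not the PV_Meta schema:

```python
# Invented in-memory stand-in for the PV_Meta association table.
pv_meta = {
    "SR:C01-MG{PS:QF1}I:Sp": {"install_id": 101,
                              "rel": ("readback", "SR:C01-MG{PS:QF1}I:Rb")},
    "SR:C01-MG{PS:QF1}I:Rb": {"install_id": 101,
                              "rel": ("setpoint", "SR:C01-MG{PS:QF1}I:Sp")},
}

def readback_of(pv):
    """Return the readback PV paired with a setpoint PV, else None."""
    kind, other = pv_meta[pv]["rel"]
    return other if kind == "readback" else None

print(readback_of("SR:C01-MG{PS:QF1}I:Sp"))   # SR:C01-MG{PS:QF1}I:Rb
```

Because both directions reference the same install_id, the same lookup also answers the hardware-to-software question the slide calls the connection between hardware and software.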

  21. Elog – overview • Motivation derived from the NSLS2 retreat. • An elog system consists of a log entry and a context. • elog message streams (next slide) – different “elog-books” with differing security requirements • context: • elog thread, time of entry • component id • install id • machine status, machine mode

  22. Elog – requirements • log stream types: • operations log – strong security/audit trail requirements • attach machine context (machine mode, fill pattern, etc) • commissioning/physics log – narrative, accompanied by screen shots • attach css config meta-data? • engineering logs • engineering notes pertinent to component instances. Needed now. • control logs • component history • hooks to ticketing system (Artemis)? reduce redundant data entry (e.g. operations log and ticketing system)

  23. E-log – work-in-progress • Schema based on the RHIC e-log • narrative of commissioning, operations, diagnosis • strong connection with the RHIC archiving system • Extended to include “IRMIS context” • ‘cmpnt_id’, ‘install_id’, css-config, … • Capture magnet (and other instrument) measurement notes • Basis for, or hooks to, component history • Python library under development – but: working with E. Berryman, an FRIB standalone elog with hooks to IRMIS looks very promising.

  24. Performance • Policy: do as much processing as possible on the RDB server, and ship the minimum possible over the wire. Leverage server-side Python. • Potential speed enhancements: • satellite/cluster IRMIS servers for fast/slow queries; load balancing • use in-memory relational tables for all but the EPICS databases • stored procedures for multiple turn-around queries • e.g. determining the full ancestry of a leaf component • disadvantage: this is not portable between RDBMSs • query tuning – depends on the RDBMS, so requires site-specific tuning • this level of tuning pretty much rules out object-relational (ORM) mapping.
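The full-ancestry example is exactly the multiple turn-around query in question: without a stored procedure, the client issues one round trip per housing level. A sketch with an in-memory parent map standing in for the housing table (component names illustrative):

```python
# Parent links as the client would fetch them, one query per level.
parent_of = {"ADC-01": "CR-01", "CR-01": "RK-SR1", "RK-SR1": None}

def ancestry(leaf):
    """Walk parent links from a leaf component up to the top of the tree."""
    chain, cur = [leaf], leaf
    while parent_of[cur] is not None:   # each iteration = one DB round trip
        cur = parent_of[cur]
        chain.append(cur)
    return chain

print(ancestry("ADC-01"))   # ['ADC-01', 'CR-01', 'RK-SR1']
```

A stored procedure collapses this loop into a single round trip at the cost of RDBMS portability, which is the trade-off the slide notes.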

  25. Work in Progress – Wish List • Python alignment capture into IRMIS – work with the alignment group • propose capturing the distance of the centre of a component relative to the centre of its containing component • API to provide conversion between this and other coordinate systems (distance from injection point, GPS, etc) • Service configuration files. Extend to HLA config files? (e.g. logscore). J. Rock/SLAC • Component history • Third party? needs hooks to IRMIS. Resume requirements gathering • Ticketing system – third party? needs hooks to elog, IRMIS • Installation history • Schema for PLC code generation • Document capture. R. Tanner/CLS

  26. Not done – and no resources to do it • Prescriptive PV.. • “But how can we design something like that while still retaining the easy editability (and the ability to use version control systems) that we have with plain text files?” (B. Franksen, tech-talk) • Travelers: a document attached to a manufactured item containing various check lists and performance criteria. Potentially useful in selecting and grading item instances that best match the lattice requirements. • Cmpnt channels: the original idea was to associate a cmpnt “channel” with an IO PV. The control system is now dominated by “UPDs” – user-programmable devices (PLCs, FPGAs, intelligent Network Attached Devices) that do not export a fixed set of channels. Effort has been halted at this point.

  27. The same crawler / rapid-prototype picture, annotated by domain (Controls, Physics, Engineering): • RPE: • lattice • cmpnt inventory • cmpnt history • magnet measurements • traveler • elog • pv_meta • service config (alh, ar, score) • logscore • cmpnt channels • PERL: • epics config software • CA clients • history • DSL: • component types • cmpnt config (installation) • cables, tray partitions
