
Diamon/Laser Design proposal


Presentation Transcript


  1. Diamon/Laser Design proposal
  Marek Misiowiec, BE/CO/AP, May 2010

  2. Core Overview
  A multi-tier, distributed, fail-safe system processing diagnostic data collected repeatedly from a variety of sources. Raw data of different shapes is transformed into unified formats and further analyzed on the middle layer according to well-defined rules. Actions may be taken, in the form of alarms or monitoring information delivered to clients. Processed data is made available as JAPC parameters or to dedicated GUIs, and is also stored in a long-term archive. Downward communication with the source devices is possible.
  [Diagram: devices (devs) deliver raw data to the DAQ tier, which passes unified data to the server tier (SRV); clients and apps receive the client view.]
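A minimal sketch, in Java, of the raw-to-unified transformation described above. All type names (RawSample, UnifiedSample, SampleTransformer) are hypothetical illustrations of the idea, not the actual core interfaces.

```java
// Hypothetical types illustrating the raw-to-unified step; not the real core API.
interface RawSample {
    String sourceId();
    Object payload();                        // device-specific shape
}

interface UnifiedSample {
    String sourceId();
    long timestamp();
    java.util.Map<String, Object> fields();  // normalised key/value view
}

interface SampleTransformer {
    // applied on the middle tier, before rules, conditioning and publishing
    UnifiedSample transform(RawSample raw);
}
```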

  3. Core Overview
  • Diamon/Laser core responsibilities:
    • data acquisition from various sources – push & pull modes (see the sketch below)
    • communication through layers, publishing
    • data transformation, processing, analysis, conditioning
    • failover, database separation, scaling
  • complementary functionalities:
    • rules engine
    • logging, archiving, configuration updates
  • non-core:
    • definitions & rules, data provider’s workflow
    • GUIs, dedicated client applications
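A minimal sketch of how an acquisition source could expose both modes listed above; AcquisitionSource and SampleListener are made-up names, and UnifiedSample is the hypothetical type from the previous sketch.

```java
// Hypothetical acquisition contract covering push & pull modes.
interface SampleListener {
    void sampleReceived(UnifiedSample sample);  // push mode: source calls back on new data
}

interface AcquisitionSource {
    UnifiedSample poll() throws Exception;      // pull mode: the core polls on its own schedule
    void subscribe(SampleListener listener);    // push mode: register a callback
}
```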

  4. Importance of a change
  Nowadays:
  • upgrades mess
    • stale sources, legacy constraints, dusty components
  • meeting only part of the requirements
    • partial data, simplistic thresholds, no conditioning
  • monitoring based on alarms - Diamon on Laser
    • misuse of the system, overhead
  • mediocre scaling
    • badly-behaving sources/clients cannot be detached/isolated
  • legacy solutions hiding in the Laser world
    • SonicMQ, OC4J, EJBs, Hibernate, Sleepycat
  • ...and no standard CO components in place

  5. Importance of a change
  We would like to:
  • make wide use of CO solutions and help establish others worth it
  • better drive interactions with the bottom and upper tiers
  • consolidate logic in the system core
  • be able to draw more conclusions from the collected data
  • sustain upgrades, updates, dependency changes

  6. Transition

  7. Transition

  8. Transition

  9. Transition

  10. Transition

  11. Technical outcome
  Then:
  • unified view of complete data
  • sound construction on CO materials
    • OC4J -> Spring, Hibernate -> JDBC Templates (see the sketch below), EJBs -> POJOs, SonicMQ -> ActiveMQ, multitude of data formats -> JAPC, in-house database -> CCDB + Logging
  • wider choice of middleware: CMW, YAMI, JMS
  • focusing logic in one place
    • rules, thresholds, alarm generation
    • conditioning: machine mode, working hours, state of subsystems
  • driving the whole process
    • detachable sources, clients
    • less restart/reboot downtime
  • convergence with similar efforts
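As an illustration of the Hibernate -> JDBC Templates move, a small DAO sketch using Spring's JdbcTemplate; the table and column names are invented for the example.

```java
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;

import javax.sql.DataSource;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;

// Plain-JDBC access through Spring's JdbcTemplate instead of a Hibernate mapping.
class AlarmDefinitionDao {

    private final JdbcTemplate jdbcTemplate;

    AlarmDefinitionDao(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    // Example query; "alarm_definitions" and its columns are made up.
    List<String> findActiveAlarmIds() {
        return jdbcTemplate.query(
                "SELECT alarm_id FROM alarm_definitions WHERE active = 1",
                new RowMapper<String>() {
                    public String mapRow(ResultSet rs, int rowNum) throws SQLException {
                        return rs.getString("alarm_id");
                    }
                });
    }
}
```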

  12. Core client interactions
  [Architecture diagram: consoles and other apps reach the Core Services over JMS client middleware; the core hosts a JAPC provider, workflow publisher, alarm generator, conditioning (conditions, rules, thresholds), an archiver backed by the ArchiveDB, configuration from the ConfigDB and links to externals; information is exchanged with DAQ processes [1], [2], ... [n], which tackle problematic sources and reach Clic agents, CMW sources, Laser and other sources (Wie, Twi) over source middleware protocols (japc-ext-yami, japc-ext-cmw, japc-ext-snmp, jms-to-japc proxy, japc-ext-?).]
  A sketch of the client-side JMS subscription follows below.
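A rough sketch of a client subscribing to core publications over the JMS client middleware, assuming ActiveMQ as the broker; the broker URL and topic name are invented for the example.

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Session;
import javax.jms.Topic;

import org.apache.activemq.ActiveMQConnectionFactory;

// Client-side JMS subscription to monitoring updates published by the core.
public class CoreClientSubscriber {

    public static void main(String[] args) throws Exception {
        // Broker URL and topic name are placeholders, not the real configuration.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://broker.example.cern.ch:61616");
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        Topic topic = session.createTopic("CORE.MONITORING.UPDATES");
        MessageConsumer consumer = session.createConsumer(topic);
        consumer.setMessageListener(new MessageListener() {
            public void onMessage(Message message) {
                // react to a published monitoring update
                System.out.println("Received: " + message);
            }
        });

        connection.start();  // start delivery of published messages
    }
}
```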

  13. Available technologies
  Building blocks:
  • CO standard components, e.g. JAPC
  • CO less standard, but in demand, e.g. japc-ext-yami
  • Laser dev code base, ready to reuse, e.g. caching, distribution
  • TIM being refurbished and facing a similar future
  • other CO projects: INCA

  14. Lessons learned from Laser
  • eradication of legacy concepts & technologies pays off
  • to scale well, we need to distribute processing power
    • a supervisor over multiple workers
    • dividing is difficult
  • separation of concerns
    • a single processing unit as a set of loosely coupled actions (sketched below)
    • patterns: chain of responsibility, observable
    • upward flow: source input -> processing -> publishing
  • cater for database outages through local caching
  • JMS+RMI as the middleware for all parties involved
    • more advanced features of the JMS channel help
    • ...but no JAPC concepts were introduced at all
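A minimal sketch of "a single processing unit as a set of loosely coupled actions" wired as a chain of responsibility; all names are hypothetical.

```java
import java.util.Arrays;
import java.util.List;

// One processing unit composed of small, independent actions applied in order.
class ProcessingChain {

    interface Action {
        // Handles one concern and passes the (possibly modified) sample on;
        // returning null stops the chain, e.g. when a sample is filtered out.
        Object apply(Object sample);
    }

    private final List<Action> actions;

    ProcessingChain(Action... actions) {
        this.actions = Arrays.asList(actions);
    }

    Object process(Object sample) {
        Object current = sample;
        for (Action action : actions) {
            if (current == null) {
                break;  // an earlier action dropped the sample
            }
            current = action.apply(current);
        }
        return current;
    }
}
```

A unit could then be assembled as, say, new ProcessingChain(validate, applyThresholds, condition, publish), keeping each action independently testable and replaceable.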

  15. Device/property model
  • to clarify the current incoherent view
  • strong incentives:
    • speak one language
    • let everyone fit in (Logging)
    • common tools, well established in the CO world
  • migration of legacy sources
    • expose properties
    • eradicate laser-source as it is
  • interaction with devices through JAPC calls (sketched below)
  • domain entities:
    • Alarm, Alarms, Diagnostic, ...
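A rough sketch of subscribing to a device property through JAPC, as mentioned above. The device/property name is invented, and the exact JAPC class and method names used here are assumptions to be checked against the current API.

```java
import cern.japc.AcquiredParameterValue;
import cern.japc.Parameter;
import cern.japc.ParameterException;
import cern.japc.ParameterValueListener;
import cern.japc.Selector;
import cern.japc.factory.ParameterFactory;
import cern.japc.factory.ParameterValueFactory;

// Subscribing to a (made-up) diagnostic property exposed by a device.
public class DiagnosticReader {

    public static void main(String[] args) throws Exception {
        Parameter parameter = ParameterFactory.newInstance()
                .newParameter("DIAMON.AGENT.TEST/Diagnostic");  // hypothetical device/property

        Selector selector = ParameterValueFactory.newSelector(null);  // no timing selector

        parameter.createSubscription(selector, new ParameterValueListener() {
            public void valueReceived(String name, AcquiredParameterValue value) {
                System.out.println(name + " -> " + value.getValue());
            }
            public void exceptionOccured(String name, String description, ParameterException e) {
                System.err.println(name + ": " + description);
            }
        }).startMonitoring();
    }
}
```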

  16. JAPC extensions
  • to unify the view of a variety of devices/applications
  • available now:
    • CMW (japc-ext-cmwrda) and JMS (remote) as the most popular
  • we need some more:
    • YAMI through japc-ext-yami is already available, used in the new Clic agent
    • SNMP through japc-ext-snmp is under development

  17. Bottom tier
  Experience gained in:
  • current Diamon and Laser
    • e.g. CMW Alarm Monitor as a huge data acquisition process
  • LHC Concentrators, similar problems
  • INCA data acquisition layer
  Progress:

  18. Already developed
  • JAPC extensions
    • current version of YAMI meets our requirements
  • Diamon Clic agent
    • fully YAMI-based, JAPC values
  • CMW Alarm Monitor refurbished
    • 6000 subscriptions to GM and FESA devices
    • will be incorporated into new core
  • proxy between Laser production alarms and JAPC values
    • for the legacy sources

  19. Open questions
  Still to think over:
  • best components and collaborations to choose
  • level of distribution and scaling
    • shared memory, distributed cache
  • storage: short-term, long-term, Logging DB
  • rules handling: homemade, proprietary
  • ...and many others
