
Introduction to mTCA and LLRF review



  1. Introduction to mTCA and LLRF review Tom Himel June 4, 2012

  2. Outline • History of the mTCA projects at SLAC • History of the SLAC control system • Phase I of the upgrade for LCLS • Decision to use µTCA architecture for new development • Description of µTCA • Advantages of µTCA • Pros and Cons and our decision making process • Summary

  3. Brief History of SLAC mTCA projects (1 of 4) • In 2008, as LCLS construction was being completed, we started an upgrade of the SLC control system, which still ran the linac. • This replaced the VMS Alpha mainframe, Multibus micros, and a proprietary network with Linux servers, VME micros (IOCs), and EPICS. • It still leaves the 1980s-era CAMAC modules in place. • This upgrade is complete except for a couple of loose ends.

  4. Brief History of SLAC mTCA projects (2 of 4) • In 2010 we started a project to develop hardware to replace CAMAC in the linac. • The idea was to upgrade the middle 10 sectors that FACET now uses 4 months per year and that LCLS-II will use full time in 2018, when it is complete. • We developed things for which we had no existing solution we were happy with: • First, LLRF, for which only the CAMAC solution existed • Then a stripline BPM, where we were unhappy with the LCLS solution

  5. Brief History of SLAC mTCA projects (3 of 4) • Decided to use the mTCA for Physics form factor. • Held a review in Dec. 2010 to aid in this decision. • Their report is at https://slacportal.slac.stanford.edu/sites/ad_public/events/mtca_llrf_jun_2012/Published_Documents_2010/SLACLinacUpgradeReviewReport.pdf • Their main recommendation was "Revisit MicroTCA for Physics Decision after Installation of Prototype System": • "MicroTCA for Physics is a very attractive technology for the sophisticated applications discussed in the upgrade proposal. However there are few COTS products currently available for use by the upgrade group. Therefore the upgrade group should set a threshold for what they consider a viable number of COTS products and suppliers necessary in order to determine that the MicroTCA for Physics standard has sufficient commercial support to remain a viable standard for the foreseeable future. The final decision to proceed with this standard for the production run of the linac upgrade should only be made if that threshold is met or exceeded at the end of the current year when the prototype unit has been completed." • That is why we are here today.

  6. Brief History of SLAC mTCA projects (4 of 4) • It has become clear that all AIP money in the next 5 years will be used for LCLS and LCLS-II improvements, not for upgrading the CAMAC hardware. • Some new hardware for LCLS-II can be done with the mTCA electronics we have developed; upgrading the linac would then be done later. • Some would like to do this to avoid installing still more of the old-style hardware and as a path to the future upgrade. • Others are concerned about the budget and schedule risk to LCLS-II that this path incurs. • This is a second reason for this review.

  7. A few facts • There is more than one workable way to upgrade the CAMAC and equip LCLS. • This is a third reason you are here: if there were only one workable solution, the decision would be easy. • We chose our upgrade path some time ago, so many of the following slides and decision considerations are old, as we have not kept revisiting the decision.

  8. The history of the SLAC control system • Reason for describing the history: • You need to know what we are upgrading from and what we could clone if we chose to. • It also helps to know what we are upgrading when judging whether we have chosen the right solution.

  9. In the beginning • God created the heavens and the earth • … • On the eighth day (AKA the early 1980s) he created the SLC control system. • God looked at everything he had made, and he found it very good[1]. [1] after hundreds of person-years of effort by descendants of those created on day five.

  10. Block diagram of SLC control system • [Block diagram] A central VMS host with video screens (operator consoles) connects over SLCnet to Multibus I micros (CW01, LI00, …, LI30); each micro drives several CAMAC crates over serial CAMAC.

  11. Description of SLC control system • Central host VMS system runs all high-level apps and operator consoles, and holds the central fast DB. • Multibus-I micros (originally 8086 CPUs, upgraded to 386 and 486) control everything in their sector. • The above are connected with SLCnet, a proprietary polled network with cable-TV coax as its physical medium and the VMS interface as the master polling device. Ethernet was young and did not work well enough at that time.
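For readers unfamiliar with polled networks, the sketch below is a purely illustrative Python model of a master polling loop: one master queries each node in turn, and nodes transmit only when polled. The node names, reply format, and timing are invented for illustration; this is not the actual SLCnet protocol.

```python
# Illustrative toy model of a master-polled network (in the spirit of SLCnet):
# the master polls each node in strict round-robin; nodes never speak unpolled.
# Node names, reply format, and timing are invented for illustration only.
import time

NODES = ["CW01", "LI00", "LI19", "LI30"]   # hypothetical micro names

def poll(node: str) -> dict:
    """Stand-in for sending a poll frame and waiting for that node's reply."""
    return {"node": node, "status": "ok", "pending_msgs": 0}

def master_poll_loop(cycles: int = 3) -> None:
    for _ in range(cycles):
        for node in NODES:                 # round-robin over all nodes
            reply = poll(node)
            if reply["pending_msgs"]:
                pass                       # the master would drain queued messages here
        time.sleep(0.1)                    # pacing; the real cycle time depended on traffic

if __name__ == "__main__":
    master_poll_loop()
```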

  12. Description of SLC control system • Each micro controls several CAMAC crates via a proprietary serial CAMAC link. • Both commercial and proprietary CAMAC modules read out and control all the hardware: • 32-channel autoranging ADC for thermocouples, small-magnet readout, etc. • 16-channel DACs for small-magnet control, profile monitor lamp brightness, etc. • 1-channel power supply controller, coupled to an external chassis, for analog and interlock control of large power supplies • BPM modules (several flavors) • 16-channel programmable delay units for triggers • 32-channel digital input and output modules • 1-channel klystron controller with an 8088 CPU, coupled to an external chassis (MKSU)

  13. Expansion to PEP-II • Mainly expanded the SLC control system, with some upgrades: • New micros communicated with VMS via Ethernet. • New, more precise power supply controllers received digital information from the micro via BITBUS. • New types of functionality (longitudinal feedback and CW RF) were implemented in EPICS. • Cross-system SLC high-level apps (correlation plots, multi-knobs, configs, history plots, …) were made to handle EPICS items.

  14. LCLS • All new devices are controlled with EPICS. • New high-level applications were written (mainly in Matlab) to control all the new EPICS devices, and were extended to control most of the old SLC linac devices. • LI20-30 linac magnets and BPMs were upgraded to EPICS hardware as AIP projects after LCLS project completion. • Klystrons, timing, vacuum, and analog and digital status are all still in the SLC control system.

  15. Upgrading of the SLC control system • As previously mentioned, in 2008 we decided to upgrade the old control system: • It was a significant source of downtime. • Virtually everyone knowledgeable in VMS, FORTRAN or CAMAC had retired or been laid off. • Decided to do it in two phases: • Phase I gets rid of VMS, SLCnet and the micros and is software-dominated. • Phase II replaces CAMAC with modern hardware and is hardware-dominated. • Phase I is virtually complete for the last 10 sectors (LCLS). • Phase II module design is well advanced and is being proposed for use in new LCLS-II installations before replacing existing linac hardware.

  16. Controls downtime causes

  17. Controls downtime causes • There is about an even split between micro and CAMAC downtime. • The timing downtimes are with the old timing system, not the EVG/EVR system. • We have also had significant (multi-hour) downtimes due to the SLCnet interface to the VMS computer and the old MPS system; these luckily occurred during scheduled MD or maintenance and so do not count in the above statistics. • Can't count on luck. Need to fix.

  18. Phase I upgrade block diagram • [Block diagram] The VMS host remains connected over SLCnet to the Multibus I micros (LI00–LI19), while the archiver server, Cmlog server, operator consoles, and the new VME IOCs (LI20–LI30) sit on the new Ethernet; each micro or IOC drives its CAMAC crates over serial CAMAC.

  19. Phase I upgrade • Add one VME EPICS IOC per sector, connected with a new Ethernet network • Build a VME serial CAMAC interface (PSCD) that uses a commercial serial I/O card with our firmware • Produce all the necessary CAMAC drivers, device support, EPICS databases, displays, and high-level apps • Test it all without impacting LCLS • Switch over on a maintenance day
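As a concrete aside on the "EPICS databases … high-level apps" item: once a channel is served by one of the new IOCs, it can be read and written with any standard Channel Access client. The snippet below is a minimal sketch using the pyepics library; the PV name is a made-up placeholder, not an actual SLAC channel.

```python
# Minimal Channel Access client sketch using pyepics (pip install pyepics).
# The PV name is a made-up placeholder, not an actual SLAC channel name.
from epics import PV

bpm_x = PV("LI24:BPMS:801:X")          # hypothetical BPM x-position PV

value = bpm_x.get(timeout=2.0)         # read the current value over Channel Access
if value is not None:
    print(f"{bpm_x.pvname} = {value:.3f}")
else:
    print(f"{bpm_x.pvname} not reachable (IOC down or wrong name)")
```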

  20. Status of Phase I upgrade • Phase I upgrade is complete in LI20-30. • These linac SLCNET micros have not been used for LCLS for months. • The 360 Hz timing information is now distributed via the new EVG/EVR system, but the nanosecond-level timing still comes from the old distribution and PDUs. • Still have CAMAC. This, timing, and PPS are the largest contributors to controls downtime. • The EPICS VME system to replace the single SLCNET micro used as the Master Pattern Generator is awaiting final testing by operations. • Sectors 0-19 (and the damping rings and e+ source) are still using the SLC control system and are actively used to run FACET four months a year.

  21. Why upgrade the linac CAMAC? • If that isn’t obvious to you after the last talk, we have invited the wrong reviewers. • Want better reliability and maintainability • Want components for which we can get repair parts • Want mostly commercially available components • Want to use modern technology that new people are willing to support

  22. The architecture choice • In 2010 we spent considerable time and many group meetings deciding on the architecture to use. • Decided on µTCA. • Will first describe µTCA to you • Then explain its advantages • Then share the pros and cons matrix that went into our decision making process

  23. Genealogy of µTCA • ATCA (Advanced Telecommunications Computing Architecture) is a standard developed for the telecommunications industry. • Emphasis was on high availability and high bandwidth. • There are many commercial modules available. • Modules are physically large (~FASTBUS size). • Connections to smaller daughter boards are part of the standard.

  24. Genealogy of µTCA • These daughter boards are called AMCs (Advanced Mezzanine Cards). • Several can be mounted on an ATCA carrier card. • Often the carrier card must be customized for the particular AMCs used, to route in the necessary I/O from the cables that go to the RTM (Rear Transition Module) attached to the ATCA card. • Some small projects can be done with ONLY AMC cards.

  25. Genealogy of µTCA • This led first to the µTCA standard and then to the µTCA for Physics standard. • The physics standard is twice the size of the minimum-sized µTCA (AMC) card and has a connector for an RTM. It is backplane-compatible with a standard µTCA card and simply defines the use of some spare lines on the backplane. • It is µTCA for Physics that we plan to use, and it will hereafter be referred to simply as µTCA. • The µTCA standard was developed by an international industry/lab committee under the auspices of PICMG. • A small but growing number of commercial products are available. Ray will elaborate in his talk.

  26. µTCA features • IPMI: Standard out-of-band network to monitor temperatures, fan speeds, voltages of both crates and modules. Allows remote control of power to individual modules. Standard software available to implement all of this. • Redundant hot-swappable fans allowing this most commonly failing component to be replaced without program interruption • Ability to have redundant power supplies and network hubs • Timing distribution provided on backplane
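As an illustration of what the IPMI out-of-band monitoring provides, the sketch below shells out to the standard ipmitool utility to dump a shelf's sensor readings over the LAN interface. The MCH hostname and credentials are placeholders, the interface type may differ by MCH vendor, and the exact sensor names vary from shelf to shelf.

```python
# Illustrative sketch: read a MicroTCA shelf's sensors (temperatures, fan
# speeds, voltages) by asking the MCH over IPMI-on-LAN with ipmitool.
# Hostname, credentials, and interface type are placeholders.
import subprocess

MCH_HOST = "mch-li24.example"        # placeholder MCH address
USER, PASSWORD = "admin", "admin"    # placeholder credentials

def read_shelf_sensors() -> str:
    """Return ipmitool's sensor-data-record listing as plain text."""
    cmd = [
        "ipmitool", "-I", "lanplus",  # IPMI v2.0 over LAN; some MCHs use "lan"
        "-H", MCH_HOST, "-U", USER, "-P", PASSWORD,
        "sdr", "list",                # one line per sensor: name, reading, status
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(read_shelf_sensors())
```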

  27. µTCA features • Truly hot-swappable modules and Rear Transition Modules (RTMs). • Allows bad modules to be swapped without added degradation of the control system. • This in turn allows more modules to be in a crate, and hence fewer crates, without degrading system availability. • Can even put multiple systems in a crate (e.g. BPM and LLRF). • The split between AMC and RTM allows an AMC module to be used for several purposes by having different, relatively simple RTM cards: e.g., put the ADC on the AMC module and have RTM cards with different signal shaping for BPMs and toroids. • Uses point-to-point communications instead of busses. Allows for high bandwidth and avoids subtle bus problems where a problem in one module causes problems in another. • Low-noise environment suitable for analog electronics. • Solid, well-tested mechanical and connector designs.

  28. µTCA backplane

  29. AMC & µRTM Modules – µTCA.4 • [Diagram] Shows the standard AMC connector and backplane, the AMC and µRTM module zones, keying, user I/O, power, and system management.

  30. Industry Prototypes: 6-Payload Shelf • [Photo] Development shelf, 6 slots, with µRTMs • Physics backplane • Non-redundant MCH, fans, power module

  31. µTCA.4 Development Platform • SLAC Linac controls upgrade • 6-Slot Prototype Shelf w/ MCH, Processor, Interim Timing System, power module, built-in fans • PMC Event Receiver (EVR) on double µTCA Adapter • Shelf non-redundant • All rear I/O access

  32. 12-Payload Shelf • Full µTCA.4 Compatibility • Fully redundant MCH, power, fans

  33. Slow I/O can be done with IP cards • There is also a PMC carrier that we presently use for our timing card.

  34. µTCA summary • Scalable modern architecture • From 5-slot µTCA … to full-mesh ATCA • Gbit serial communication links • High speed and no single point of failure • Standard PCIe and Ethernet communication • PCIe and Ethernet support is part of the operating system • Redundant system option • Up to 99.999% availability • Well-defined management • A must for large systems and for high availability • Hot-swap • Safe against hardware damage and software crashes
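The "up to 99.999% availability" and redundancy bullets go together; the back-of-the-envelope calculation below shows why a redundant pair helps so much. The MTBF and MTTR numbers are invented for illustration and are not measured SLAC or vendor figures.

```python
# Back-of-the-envelope availability model for a redundant pair of components
# (e.g. fans, power supplies, MCHs). Input numbers are invented for
# illustration; they are not measured SLAC or vendor figures.
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability of a single repairable component."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def redundant_pair(a_single: float) -> float:
    """Pair fails only if both units are down at once (independent failures)."""
    return 1.0 - (1.0 - a_single) ** 2

if __name__ == "__main__":
    a = availability(mtbf_hours=50_000, mttr_hours=4)   # hypothetical fan tray
    print(f"single unit : {a:.6f}")                     # ~0.999920
    print(f"redundant   : {redundant_pair(a):.8f}")     # ~0.99999999
```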

  35. Our Architecture decision • We briefly looked at many standards: • https://slacportal.slac.stanford.edu/sites/ad_public/events/mtca_llrf_jun_2012/Published_Documents/LCLS%20Next%20Generation%20Control%20System%20platform%20discussion.pptx • Carefully compared network-attached devices (rack-mounted chassis with Ethernet ports), VME, and µTCA. • We expect to end up with a mixed system, so we are really deciding which standard to use for new and improved things.

  36. Why not simply clone what we just did for LCLS-I? • Some parts were not done at all (linac LLRF). • Even LCLS klystrons that have new PADs and PACs depend on the old CAMAC system for interlocks and diagnostic data. • Unhappy with other parts (stripline BPM) – more later. • We likely will clone some of the parts, like the PLC system for vacuum and perhaps Beckhoff for temperatures and miscellaneous I/O.

  37. LCLS BPM rack • [Photos: front and rear views]

  38. LCLS BPM rear close-up

  39. LI20 LCLS network rack

  40. BPM chassis • Each has: • 4 signal cables (unavoidable) • A trigger-at-beam-time cable • A calibration trigger cable • An Ethernet port for channel access • An Ethernet port used to pass raw data at 120 Hz to a VME IOC for processing, as the internal CPU is too slow • A serial connection to a terminal server to allow viewing of the IOC console • A power cable to an Ethernet-controlled power strip so power can be cycled to perform a remote reset

  41. BPM chassis • This was a design kludged together from available parts in 4 months when the originally planned design for LCLS failed. • It was then propagated to 10 linac sectors, as we didn't have time to do a proper redesign and wanted its improved analog performance. • It works! Physicists are quite happy. But we REALLY don't want to propagate this again! It needs a design using a crate, e.g. µTCA.

  42. BPM in µTCA • Each module has: • 4 signal cables (unavoidable) • A trigger-at-beam-time cable → on the backplane • A calibration trigger cable → on the backplane • An Ethernet port for channel access → PCIe on the backplane to the CPU • An Ethernet port used to pass raw data at 120 Hz for processing → PCIe on the backplane to the CPU • A serial connection to a terminal server to allow viewing of the IOC console → only the CPU has one • A power cable to an Ethernet-controlled power strip for remote reset → IPMI handles this
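To make the cable-plant reduction concrete, the toy tally below simply counts the external connections listed on the two BPM slides above; the counts come straight from those lists, and everything else is illustrative.

```python
# Toy tally of external connections per BPM, taken from the two slides above.
lcls_chassis = {
    "signal cables": 4,
    "beam trigger cable": 1,
    "calibration trigger cable": 1,
    "Ethernet (channel access)": 1,
    "Ethernet (raw data to VME IOC)": 1,
    "serial console": 1,
    "switched power cable": 1,
}

# In µTCA the triggers ride the backplane, data goes over backplane PCIe to the
# crate CPU, only the CPU has Ethernet/serial, and IPMI replaces the power strip.
utca_module = {"signal cables": 4}

print("LCLS-I chassis:", sum(lcls_chassis.values()), "external connections per BPM")
print("µTCA module  :", sum(utca_module.values()), "external connections per BPM")
```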

  43. VME situation • VME is almost 30 years old; our system should operate for another 20-30 years. • The number of new developments is decreasing; sales are still constant. • Bus technology has speed limitations. • Wide busses create a lot of noise in analog channels. • No standard management at the crate level. • No management at the module level. • So far no extension bus has survived. • One damaged bus line stops a whole crate. • Address and interrupt misconfigurations are hard to find.

  44. Reasons for choosing µTCA • Use an industry standard to share with others • Redundant, hot-swappable fans and modules allow troubleshooting and maintenance during user runs, improving reliability • Cable-plant reduction compared to network-attached devices • Systems can share crates with minimal impact

  45. Reasons for choosing µTCA • Firmware can be remotely loaded (presently we bring each module to the lab for this) • Standard system to monitor temperatures and voltages • A new standard rather than one nearing retirement • Modular, so pieces of it can be upgraded • Uses new technology, which lets us challenge and keep good engineers

  46. Decision spreadsheet • The presentation so far has been one-sided, listing the advantages of µTCA and no disadvantages. • We were much more balanced in our decision making process. • The spreadsheet at the same site as this talk contains the detailed pros and cons list that was a key part of our decision making process. https://slacportal.slac.stanford.edu/sites/ad_public/events/mtca_llrf_jun_2012/Published_Documents/LLRF%20design%20compare4.xls

  47. More details in later talks • Ray will explain more about mTCA and to what extent industry and labs are using it. • Qiao will give technical details on the LLRF development • Charlie will give the status of the infrastructure hardware and software needed to support our I/O modules • Dan will give technical details on the stripline BPM development • I will outline the plans for upgrading the linac • Qiao will go over the LCLS-II injector LLRF system design including cost and schedule

  48. Summary • The linac control system clearly needs upgrading. • We can use the same technology to build LCLS-II. • µTCA is a good choice for the architecture. • There are other architecture choices that would also work. Are they enough better than mTCA to warrant changing?
