
Presentation Transcript


  1. R. Brough Turner NMS Communications rbt@nmss.com October 1, 2003

  2. Acknowledgements • This tutorial draws heavily on the work of • Chuck Byers, Lucent Technologies • Jim Kennedy, Intel • Mark Overgaard, Pigeon Point Systems • Michael Thompson, Pentair/Schroff • Henry Wong, Motorola who jointly presented a more detailed half-day AdvancedTCA seminar on May 21, 2003, see: • http://www.picmg.org/advancedTCA_Tutorial_0503.stm • I am also indebted to Joe Pavlat, President of PICMG, for diverse content and advice

  3. Today’s Goal • Describe the AdvancedTCA platform and its applications in enough detail so that you can make informed decisions. Outline: • AdvancedTCA background and market focus • Technical overview • Details, details, details • Application examples • Current market status • Conclusions and Q&A

  4. Evolving Telecom Market • Much hyped voice-data convergence • Slower than once predicted, but happening • Core network data rates increasing • Network equipment performance limited by I/O • Enhanced service platforms leveraging standard computers and operating systems • Frequently SPARC Solaris or Linux on Intel • Tier 1 equipment providers have downsized • Seeking to outsource and leverage COTS technology

  5. Shortfall of Existing Standards • Board area, board spacing, power and cooling inadequate for existing and emerging silicon • Backplane capacity limited • Diverse system management approaches • Difficulty meeting availability objectives • Legacy bus (VME, PCI) limitations • Power distribution issues • Poor match to telecom equipment practices • Chassis mechanical standards don’t match equipment frames or support needed I/O • Systems fail to leverage telecom power architectures

  6. AdvancedTCA Objectives • Meet the evolving needs of communications network infrastructure • Telecom focus • Edge, core and transport, as well as data center • Wireless, wireline and optical network elements • High levels of service availability (99.999% or more) • Scalable performance and capacity • Reduce development time and total cost of ownership • Open architecture, modular components sourced by a dynamic, interoperable, multi-vendor market

  7. AdvancedTCA Features • Serial links & switch fabric technology • Redundant star or full mesh data transport, with switch fabric alternatives, can scale to 2.5Tb/s • Dual redundant Ethernet control plane • Large board area supports latest silicon • Power and cooling for up to 200 watts per slot • 600 mm (~24”) ETSI frame equipment practice • Provision for 19” and 23” versions • Sophisticated system management • Advanced software infrastructure for OAM&P • Health monitoring; module power control; electronic keying; active cooling control

  8. AdvancedTCA Features (cont.) • Telecom centric I/O • Multi-protocol support for interfaces up to 40 Gb/s • Front-to-back clearances for simultaneous front and rear I/O including fiber bend radii • Optional rear transition modules for rear I/O • Availability scales to 99.999% or more • Single board failure domain • Hot swap capability for all boards & active modules • Redundant -48 VDC power feeds • Redundant control and data planes • Sophisticated system management

  9. Comparison With PICMG 2.X (values given for PICMG 2 / CPCI, PICMG 2.16 / CPSB, and PICMG 3 / ATCA, respectively)
     • Board size: 6U x 160 mm x 0.8" (57 sq in + 2 mezzanines) / 6U x 160 mm x 0.8" (57 sq in + 2 mezzanines) / 8U* x 280 mm x 1.2" (140 sq in + 4 mezzanines)
     • Board power: 35-50 W / 35-50 W / 150-200 W
     • Backplane bandwidth: ~4 Gb/s / ~38 Gb/s / ~2.4 Tb/s
     • Number of active boards: 21 / 19 / 16
     • Power system: central converter, 5/12/3.3 V backplane / central converter, 5/12/3.3 V backplane / distributed converters, dual -48 V backplane
     • Management: OK / OK / advanced
     • I/O: limited / OK / extensive
     • Clock, update, test bus: no / no / yes
     • Regulatory conformance: vendor specific / vendor specific / in standard
     • Multi-vendor support: extensive / building / anticipated in 2003
     • Base cost of shelf: low / low-moderate / moderate
     • Functional density of shelf: low / moderate / high
     • Lifecycle cost per function: high / moderate / low
     • Standard GA schedule: 1995 / 2001 / 2H2003
     Source: Chuck Byers, Lucent Technologies

  10. AdvancedTCA History • Informal “Santa Barbara” group 7/2001 • PICMG 3 work group chartered 11/2001 with over 100 companies participating • Subteams: Form Factor, Backplane/Fabric, RASM • Base spec (PICMG 3.0) for mechanical, power, cooling, interconnect, and RASM properties • Sub-specs for data transport fabric alternatives • PICMG 3.0 spec ratified 12/30/2002 • 430 pages; 11,000 person-hours of meetings and conference calls; untold hours of individual work • Real interoperable products available today • Fifth interop workshop held September 2003

  11. Details, Details, Details... • Mechanical • Power & Thermal • Management • Data Transport • Mezzanine Cards • Regulatory

  12. Mechanical Configuration • 8U boards in 12U chassis • 16 slots in 600 mm frame • 14 slots in 19” cabinet • 1.2” board pitch allows heat sinks plus rear SMT • Forced air cooling for up to 200 watts per slot • Front and rear fiber bend area in 600 mm depth • Simplified sheet metal construction • ETSI & NEBS vibration, shock and serviceability

  13. Chassis Mechanics, Depth • 600 mm cabinet depth • 25 mm thick doors (front and rear) • 100 mm front cable/air area • 60 mm rear cable/air area • 390 mm remains for chassis • 388 mm faceplate to faceplate Source: Michael Thompson, Pentair/Schroff
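The depth budget above is simple subtraction; a minimal sketch follows, assuming a 25 mm door at both the front and the rear of the cabinet (the only reading under which the slide's 390 mm figure works out):

```python
# Chassis depth budget for a 600 mm ETSI cabinet (all values in mm).
# Assumption: one 25 mm door at front AND rear, inferred from the slide.
CABINET_DEPTH = 600
DOOR = 25              # front and rear doors
FRONT_CABLE_AIR = 100  # front cable/air area
REAR_CABLE_AIR = 60    # rear cable/air area

chassis_depth = CABINET_DEPTH - 2 * DOOR - FRONT_CABLE_AIR - REAR_CABLE_AIR
print(chassis_depth)   # 390 mm remains for the chassis
```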

  14. Sheet Metal Construction • Lower cost in high volume • Looser tolerances • Earthquake performance • 1.6 mm to 2.4 mm slots (Figure: ESD clip, guide rail funnel, retention screw receptacle, alignment pin receptacle) Source: Michael Thompson, Pentair/Schroff

  15. Board Mechanics • Front board 8U x 280 mm • Rear board 8U x 70 mm • Connects directly to front board • Board width 6HP (1.2”) • PCB thickness: 1.6 mm – 2.4 mm allowed • Simplified telecom packaging • Provisions for 4 PMCs • Alignment/key pins Source: Michael Thompson, Pentair/Schroff

  16. Face Plate • Sheet metal solution • EMC gasket on left side • Less prone to damage • M3 retaining screw • No tool required • Handle actuated micro switch for Hot Swap • PCB offset 0.1” for all boards • Alignment/ground pin on front panel Source: Michael Thompson, Pentair/Schroff

  17. Rear Transition Module (RTM) • 8U x 70 mm form factor • Same mechanics as front board (mirror image) • Alignment and keying • To backplane • To front board • Front board to RTM connector not specified • Application-specific vendor choice Source: Michael Thompson, Pentair/Schroff

  18. ESD Discharge • ESD discharge strips on bottom edge (front board and RTM) • Segment 1: 10 Mohms to face plate, discharges installer • Segment 2: 10 Mohms to logic ground, discharges board • Segment 3: 0 ohms to face plate, solid ground even if the board is not mated with the backplane • Keep-out area for isolation Source: Michael Thompson, Pentair/Schroff

  19. AdvancedTCA Backplane • Zone 3 for front board to RTM interconnection • Connector not defined by ATCA; could be fiber, coax, signal, etc. • Separate keying • Zone 2 for base interface and fabric • Zone 1 for power and system management Source: PICMG

  20. Backplane • Connector population options depend upon fabric architecture • Full mesh backplane shown here (Figure: Zone 3; Zone 2 connectors J20-J24; node slots, hub slots, base interface; Zone 1, power & management) Source: Michael Thompson, Pentair/Schroff

  21. Details, Details, Details... • Mechanical • Power & Thermal • Management • Data Transport • Mezzanine Cards • Regulatory

  22. Power Requirements • -48/-60 VDC power input • -48 VDC nominal: -40.5 to -57 VDC; -60 VDC nominal: -50 to -72 VDC • Redundant power inputs • Distribution of ringing voltages • Capacity of over 3,200 watts per shelf • Local power conversion • DC-DC converters on each board Source: Michael Thompson, Pentair/Schroff
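The two input ranges above overlap, so a single feed is acceptable if it falls in either window. A hedged sketch of such a range check (the function name is illustrative, not from the specification):

```python
def in_dc_input_range(volts: float) -> bool:
    """Check a (negative) DC input against the two nominal windows:
    -48 VDC nominal: -40.5 to -57 VDC; -60 VDC nominal: -50 to -72 VDC."""
    return -57.0 <= volts <= -40.5 or -72.0 <= volts <= -50.0

assert in_dc_input_range(-48.0)       # nominal -48 V feed
assert in_dc_input_range(-60.0)       # nominal -60 V feed
assert in_dc_input_range(-54.0)       # in the overlap of both windows
assert not in_dc_input_range(-39.0)   # too shallow for either window
```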

  23. Power Distribution • Redundant power inputs • Positronic power connector with sequenced pins • Other power distribution architectures are allowed Source: Michael Thompson, Pentair/Schroff

  24. Zone 1 Connector Source: Michael Thompson, Pentair/Schroff

  25. Board Power, Simple • Single DC-DC converter fed through four diodes from both DC inputs • Option for two separate DC-DC converters fed from both DC inputs (with or without diodes) • Inrush current limiting and fusing Source: Michael Thompson, Pentair/Schroff

  26. Board Power, Complete • A board may only consume 10 W until more power is negotiated with the Shelf Manager Source: Michael Thompson, Pentair/Schroff
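The 10 W pre-negotiation limit can be sketched as a tiny state model. This is an illustration of the rule only; the `Board` and `ShelfManager` class names and the negotiation API are made up for the example, not taken from the specification:

```python
# Sketch of payload power negotiation: a newly inserted board may draw at
# most 10 W until the Shelf Manager grants a larger budget.
class ShelfManager:
    def __init__(self, shelf_budget_w: float):
        self.remaining_w = shelf_budget_w

    def negotiate(self, requested_w: float) -> float:
        """Grant the request, capped by what the shelf has left."""
        grant = min(requested_w, self.remaining_w)
        self.remaining_w -= grant
        return grant

class Board:
    MANAGEMENT_POWER_W = 10.0  # allowed before any negotiation

    def __init__(self):
        self.budget_w = self.MANAGEMENT_POWER_W

    def power_up(self, shelf: ShelfManager, payload_w: float) -> None:
        self.budget_w = shelf.negotiate(payload_w)

shelf = ShelfManager(shelf_budget_w=3200.0)
board = Board()
assert board.budget_w == 10.0          # pre-negotiation limit
board.power_up(shelf, payload_w=200.0)
assert board.budget_w == 200.0         # full per-slot budget granted
```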

  27. Thermal Principles • Central office environment • NEBS/ETSI; N+1 Cooling; 72 hr A/C failure means high ambient temperatures • 200 W per board with forced air cooling • 16 boards in a 12U shelf, ~ 3200 watts/shelf • 3 shelves per frame, ~ 10 kW frame • Provide framework for interoperability • Designer, manufacturer, system integrator • Define thermal interfaces • Provide design guidelines
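The shelf and frame power figures above follow directly from the per-board budget:

```python
# Shelf and frame power budgets implied by the slide's figures.
BOARDS_PER_SHELF = 16
WATTS_PER_BOARD = 200
SHELVES_PER_FRAME = 3

shelf_w = BOARDS_PER_SHELF * WATTS_PER_BOARD   # 3200 W per shelf
frame_w = shelf_w * SHELVES_PER_FRAME          # 9600 W, i.e. ~10 kW per frame
```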

  28. Computational Fluid Dynamics • Extensive simulations run to determine “up front” if 200 W per slot was possible • Followed with flow chamber measurements • Final design supports up to 200 W per slot with less than 10 °C temperature rise, given 55 °C intake air and one failed fan Source: PICMG

  29. Shelf Cooling • Front to rear airflow “should” • Side to side airflow “may” • Zone 3 airflow seal • NEBS air filter • Cooling arrangement not specified • Flow curve requirements: pressure drop vs. flow rate (DP vs. CFM), including slot fan flow and empty slot impedance curves (Figure: air inlet at bottom plenum, up through front board and RTM, to fans and air outlet at top plenum) Source: Henry Wong, Motorola

  30. Front Board • 200 watts per slot • Airflow direction: bottom to top, right to left (front view) • Even inlet airflow distribution • Cooling requirements: CFM = f(power) at 10 °C rise; board impedance curve, pressure drop vs. flow rate (DP vs. CFM) Source: Henry Wong, Motorola

  31. How Much Flow Per Slot? • Operating point is the intersection of the slot fan flow curve (shelf vendor) with the combined empty slot impedance (shelf vendor) plus board impedance (board vendor), plotted as DP vs. CFM • System integrator incorporates data from shelf and board vendors • Simple pass/fail test based on flow (not temperature) Source: Henry Wong, Motorola
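Finding that operating point is a root-finding exercise: the flow at which fan pressure equals the system pressure drop. The curve shapes and coefficients below are invented for illustration; real curves come from the shelf and board vendors' measured data:

```python
# Illustrative operating-point calculation for per-slot airflow.
def fan_curve(cfm: float) -> float:
    """Fan pressure (Pa) falls as flow rises (made-up linear curve)."""
    return 120.0 - 2.0 * cfm

def system_impedance(cfm: float) -> float:
    """Slot + board pressure drop (Pa) rises ~quadratically with flow."""
    return 0.05 * cfm ** 2

def operating_point(lo: float = 0.0, hi: float = 60.0, tol: float = 1e-6) -> float:
    """Bisect for the flow where fan pressure equals system pressure drop."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if fan_curve(mid) > system_impedance(mid):
            lo = mid   # fan still overcomes the impedance: more flow possible
        else:
            hi = mid
    return (lo + hi) / 2

cfm = operating_point()  # ~32.9 CFM for these example curves
```

The same intersection could be read off a plot of the two vendor-supplied curves; the pass/fail test then just checks that this flow meets the board's CFM requirement.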

  32. Thermal Management • Operating states: 1. Discovery; 2. Normal operating; 3. Abnormal (high/low ambient, fan failure, board or component over-temperature, clogged filter, etc.); 4. Repair • Temperature sensors: shelf inlet “shall”, shelf outlet “should”, field replaceable units (FRUs) • Thermal alarming: critical, major, minor, normal; thresholds, hysteresis • Rules: FRU, ShMM, System Manager Source: Henry Wong, Motorola
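Threshold alarming with hysteresis, as listed above, means an alarm asserts when a reading crosses its threshold and only clears once the reading falls back below the threshold minus the hysteresis band. A minimal sketch (class name, levels, and values are illustrative):

```python
class ThermalAlarm:
    """One alarm level (e.g. minor/major/critical) with hysteresis."""
    def __init__(self, threshold_c: float, hysteresis_c: float):
        self.threshold_c = threshold_c
        self.hysteresis_c = hysteresis_c
        self.asserted = False

    def update(self, temp_c: float) -> bool:
        if not self.asserted and temp_c >= self.threshold_c:
            self.asserted = True                       # crossed threshold
        elif self.asserted and temp_c < self.threshold_c - self.hysteresis_c:
            self.asserted = False                      # dropped out of band
        return self.asserted

minor = ThermalAlarm(threshold_c=55.0, hysteresis_c=2.0)
assert not minor.update(50.0)
assert minor.update(56.0)      # crosses 55 °C: alarm asserts
assert minor.update(54.0)      # inside hysteresis band: still asserted
assert not minor.update(52.0)  # below 53 °C: alarm clears
```

The hysteresis band prevents a sensor hovering near a threshold from flapping the alarm on and off, which matters when each transition generates an IPMI event.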

  33. Details, Details, Details... • Mechanical • Power & Thermal • Management • Data Transport • Mezzanine Cards • Regulatory

  34. ATCA™ Shelf Management • Monitor & control low-level aspects of ATCA boards and other field replaceable units (FRUs) within a shelf • Watch over basic health of the shelf, report anomalies, take corrective action when needed • Retrieve inventory information & sensor readings • Receive event reports and failure notifications from boards and other intelligent FRUs • Manage power, cooling & interconnect resources in the shelf • Enable visibility into a shelf for a logical System Manager — some mix of software + “swivel chair folk” Source: Mark Overgaard, Pigeon Point Systems

  35. ATCA™ Management Approach • Focus on low-level hardware management • Required on all boards and shelves • Monitor/control of FRUs in shelf • Adopt Intelligent Platform Management Interface (IPMI) 1.5 Revision 1.1 as foundation • IPMI widely used in PC and Server industry • Facilitate supplementary higher level services • Shelves must provide IP-compatible transport • “In-band” application management expected, but not specified Source: Mark Overgaard, Pigeon Point Systems

  36. ATCA Shelf w/ Dedicated Shelf Management Controllers Source: PICMG

  37. Shelf Manager Services • Access inventory information for all FRUs • Manage power consumption • Using FRU information (from non-volatile store on FRU) • Simplified framework for cooling management in R1.0 • Using FRU-configured temperature threshold events • Manage distributed sensors • Based on Sensor Data Records (SDRs) • Manage data transport interconnect compatibility • Based on FRU information • Collect IPMI events in persistent store and, optionally, perform configurable actions in response • Platform Event Filtering (PEF) supports configurable actions on events (e.g. an SNMP trap)

  38. Implementation Options Source: PICMG

  39. IPM Controller Extensions Beyond IPMI 1.5 Commands and FRU Information • Redundant connections to shelf manager • Hot swap state management for FRUs (including represented FRUs) • Compatibly extends the CompactPCI operator interface, including blue LED and handle switch • Electronic keying (commands + FRU information) • LED management, including color and lamp test • Fan control for interoperable fan trays • Payload power control & negotiation with shelf

  40. Key ATCA Board & IPMC Interfaces (Figure: board-specific management infrastructure) Source: Mark Overgaard, Pigeon Point Systems

  41. ATCA Management Summary • Significant PICMG effort defines interoperable extensions to IPMI • 30% of PICMG 3.0 specification devoted to Shelf Management • Sophisticated system management to support telecom reliability, availability and serviceability • PICMG 3.0 specification ensures boards, backplanes and chassis from different vendors can work together as a communications system

  42. Details, Details, Details... • Mechanical • Power & Thermal • Management • Data Transport • Mezzanine Cards • Regulatory

  43. Backplane Performance Needs • Different network elements require different raw backplane capacities, for example: • POTS switch (20k lines) 2.5Gb/s • Voice gateway (50k ports) 10Gb/s • DSLAM (4k lines @ 10Mb/s) 40Gb/s • Metro optical mux (1k ports @OC-3) 160Gb/s • Fiber to the Home (4k ports @100Mb/s) 500Gb/s • Core router (256 ports @ OC-192) 2.5Tb/s Source: Chuck Byers, Lucent Technologies
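The slide's capacity figures are roughly port count times per-port rate, rounded for overhead and headroom. Checking a few of the examples:

```python
# Raw backplane capacity estimates: ports x per-port rate, in b/s.
OC3 = 155e6      # b/s
OC192 = 10e9     # b/s (approx.; nominal line rate is 9.953 Gb/s)

dslam  = 4_000 * 10e6    # 40 Gb/s, matches the slide exactly
ftth   = 4_000 * 100e6   # 400 Gb/s, quoted as 500 Gb/s with headroom
metro  = 1_000 * OC3     # 155 Gb/s, quoted as ~160 Gb/s
router = 256 * OC192     # 2.56 Tb/s, quoted as 2.5 Tb/s
```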

  44. AdvancedTCA Backplane Goals • Improve upon existing Modular Open Systems • Independent rack-mount servers (pizza boxes) • Integrated systems of modular elements (CompactPCI) • Performance headroom • Differential signaling capable of 10 Gbps (XAUI), per fabric channel, today • Flexibility & scalability to address many markets • Single backplane configuration may support different fabric technologies and topologies • Cost reduced configurations • Carrier-grade availability • No single point of failure for any system interconnect

  45. AdvancedTCA and Fabrics • PICMG 3.0 specification: power distribution, mechanical elements, system management, regulatory guidelines, connector zones and types, fabric topology, thermal management guidelines • Subsidiary fabric specifications: PICMG 3.1, Ethernet & Fibre Channel; PICMG 3.2, InfiniBand; PICMG 3.3, StarFabric; PICMG 3.4, PCI Express & Advanced Switching; PICMG 3.5, RapidIO & Advanced Fabric Interface • Backplanes are fully defined in the PICMG 3.0 specification • Interoperable boards are defined in the subsidiary specifications Source: Jim Kennedy, Intel Corporation

  46. PICMG 3.0 Backplane (Logical View) • Zone 2: base interface, Ethernet star (4 diff. pairs each); fabric interface, full mesh shown (8 diff. pairs each); update ports (10 diff. pairs each); timing clocks (6 x 1 diff. pair each) • Zone 1: IPMB (2 traces each); 8 ring/test lines; -48 VDC power (2 feeds of -48 V and return) • Slot types: node slots, hub slots, mesh slots Source: Jim Kennedy, Intel Corporation

  47. ATCA Board Configurations • Hub board: supplies switching resources between all other boards in the shelf; base only (PICMG 3.0), fabric only, or base + fabric configurations possible; requires many Zone 2 connectors; installed in designated hub slots • Mesh-enabled board: supports a direct channel connection to all other boards in the shelf; fabric only, or fabric + base configurations possible; requires many Zone 2 connectors; installed into any slot • Node board: single, dual, or many star connections possible; base only (PICMG 3.0), fabric only, or base + fabric configurations possible; single Zone 2 connector; installed into any node slot (Figure: power connector in Zone 1, base/fabric connectors in Zone 2, rear I/O in Zone 3) Source: Jim Kennedy, Intel Corporation

  48. Fabric Topologies — Star • Nodes support a point-to-point link to the switch hub • Number of routed traces remains relatively low, keeping backplane costs low • Dedicated system slot for switching resources (hub) • Application target: non-carrier-grade applications with little latency-sensitive data traffic; unified approach reduces cost and complexity Source: Jim Kennedy, Intel Corporation

  49. Fabric Topologies — Dual Star • Redundant switches increase system availability, eliminating the single point of failure • Nodes support redundant links, one to each switch • Number of routed traces remains relatively low, keeping backplane costs low • Link between switches facilitates coordination and fail-over • Two dedicated system slots for switching resources • Application target: carrier-grade applications with non-latency-sensitive data requirements (e.g., modular server); unified approach reduces cost and complexity Source: Jim Kennedy, Intel Corporation

  50. Fabric Topologies: Dual-Dual Star • Each node supports 4 links, one to each switch resource • Separate, dedicated fabrics for control and data plane • Link between each function-specific redundant switch pair • Application target: carrier-grade applications with latency-sensitive streaming data requirements and significant control & management (TCP/IP-based) workload; separate data plane fabric allows optimized data throughput Source: Jim Kennedy, Intel Corporation
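The trace-count trade-off across these topologies reduces to simple link counting. A sketch, assuming `nodes` payload boards plus the hub boards (the full-mesh count uses the total board count, since every board links to every other):

```python
# Backplane point-to-point fabric link counts per topology (illustrative).
def star_links(nodes: int) -> int:
    return nodes                    # each node links to the single hub

def dual_star_links(nodes: int) -> int:
    return 2 * nodes + 1            # each node to both hubs, plus hub-to-hub

def full_mesh_links(boards: int) -> int:
    return boards * (boards - 1) // 2   # every board pair directly linked

# A 16-slot shelf: 14 nodes + 2 hubs in the dual star, 16 mesh boards.
print(star_links(15), dual_star_links(14), full_mesh_links(16))  # 15 29 120
```

The quadratic growth of the full mesh is why mesh backplanes route far more traces (and why mesh-enabled boards need many Zone 2 connectors), in exchange for eliminating the switch slots entirely.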
