Modern Trends in Backplane Interconnection - PowerPoint PPT Presentation

Presentation Transcript

  1. Modern Trends in Backplane Interconnection By Ken Uemura

  2. A Backplane is a circuit board (usually a printed circuit board [PCB]) that connects several connectors in parallel to each other, so that each pin of each connector is linked to the same relative pin of all the other connectors, forming a computer bus. (wikipedia)

  3. PCI, first introduced in the early 1990s as a chip-to-chip interconnect standard carrying 32 bits of data at 33 MHz, emerged as the most pervasive interconnect and backplane technology in these systems. Many system design engineers viewed PCI as a vehicle to address not only their chip-to-chip interconnect design requirements, but also to migrate PCI onto the backplane for board-to-board interconnect. PCI was never designed nor intended for backplane applications, or even mid-plane interconnect applications. Nevertheless, many design engineers successfully deployed systems that used PCI both as the chip-to-chip interconnect and as the board-to-board (backplane) interconnect. (fpgajournal)

  4. Parallel backplane design served the industry well for many years, regardless of whether the system used PCI or a proprietary parallel backplane arrangement. The challenge with parallel backplane interconnect arose as a result of increased system bandwidth requirements. (fpgajournal)

  5. The increased bandwidth requirements forced IC manufacturers and system design engineers to use wider data buses (16, 32, 64, 128 bits) and higher operating frequencies (33 MHz, 66 MHz, 133 MHz, 266 MHz). Crosstalk (XT) is any phenomenon by which a signal transmitted on one circuit or channel of a transmission system creates an undesired effect in another circuit or channel. Reflection occurs when a signal travels along a transmission medium, such as a copper cable or an optical fiber, and some of the signal power is reflected back toward its origin rather than being carried all the way to the far end. With large data buses (> 64 bits) and high operating frequencies (> 120 MHz), these effects have all but relegated parallel buses to chip-to-chip interconnect, primarily because of the length of interconnect material being driven. Driving wide buses at relatively high frequencies over long backplane traces produces intolerable signal noise from transmission line effects, namely crosstalk and reflections, which limits the usefulness of large, high-speed parallel backplanes. (fpgajournal, wikipedia)

  6. Faced with the increased performance requirements of new technologies such as 3G wireless, 10Gb Ethernet, OC-192 transport, and multi-protocol networking equipment, design engineers had to find a solution that supports higher data rates while increasing reliability and reducing cost. With these challenges, the industry turned toward the storage market for a viable solution: high-speed serial interconnect. Serial interconnect in serial backplanes has many significant benefits over legacy parallel backplanes, the first and most important being the performance and reliable, robust operation of the serial connection. PCIe (PCI Express) implements serial interconnect. (fpgajournal)

  7. Serial: The bonded serial format was chosen over a traditional parallel format because of timing skew. Timing skew is a direct result of the finite speed at which an electrical signal travels down a wire. Because different traces in an interface have different lengths, parallel signals transmitted simultaneously from a source arrive at their destinations at different times. When the interconnection clock rate rises to the point where the wavelength of a single bit is less than this difference in path length, the bits of a single word no longer arrive at their destination simultaneously, making parallel recovery of the word difficult. Thus the signal propagation speed, combined with the difference in length between the longest and shortest trace in a parallel interconnect, imposes a natural maximum bandwidth. Serial channel bonding avoids this issue by not requiring the bits to arrive simultaneously.
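As a rough illustration of that skew limit, the sketch below (Python, using assumed illustrative numbers that are not taken from the slides) computes the clock rate at which the skew between two mismatched traces equals one full bit period:

```python
# Estimate the skew-limited clock rate of a parallel bus.
# Assumptions (illustrative, not from the slides): signals on FR-4
# PCB traces propagate at roughly 15 cm/ns (about half the speed of
# light), and the longest and shortest traces differ by 15 cm, which
# is plausible on a large backplane.

PROPAGATION_SPEED_CM_PER_NS = 15.0   # assumed ~c/2 in FR-4
trace_length_mismatch_cm = 15.0      # assumed worst-case mismatch

# Skew: extra time the signal on the longest trace needs.
skew_ns = trace_length_mismatch_cm / PROPAGATION_SPEED_CM_PER_NS

# Hard limit: the clock rate at which the skew equals one full bit
# period (real designs keep skew to a small fraction of the period,
# so the practical limit is far lower).
max_clock_hz = 1.0 / (skew_ns * 1e-9)

print(f"skew = {skew_ns:.2f} ns")
print(f"skew-limited clock = {max_clock_hz / 1e6:.0f} MHz")
```

With tighter setup/hold budgets, longer buses, and wider words, the usable rate drops well below this bound, which is the effect the slide describes.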

  8. A Serializer/Deserializer (SerDes, pronounced sir-dees) is a pair of functional blocks commonly used in high-speed serial communications. These blocks convert data between serial and parallel interfaces in each direction.

  9. The basic SerDes function is made up of two functional blocks: the Parallel In Serial Out (PISO) block (aka Parallel-to-Serial converter) and the Serial In Parallel Out (SIPO) block (aka Serial-to-Parallel converter).

  10. The PISO block typically has a parallel clock in and a set of data input lines. It may use an external Phase-locked loop to multiply the incoming parallel clock up to the serial frequency. The simplest form of the PISO has a single shift register that receives the parallel data once per parallel clock and shifts it out at the higher serial clock rate. PLL compares the frequencies of two signals and produces an error signal which is proportional to the difference between the input frequencies.

  11. The SIPO block typically has a receive clock output and a set of data output lines. The receive clock may be recovered from the serial data stream using a clock recovery technique. The SIPO block then divides the incoming clock down to the parallel rate. Implementations typically have a double buffer of registers: one register clocks in the serial stream, and the other holds the data for the slower parallel side.
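The two blocks just described can be sketched in software. This is a minimal model (Python; the function names are illustrative, and the PLL/clock-recovery details are abstracted away into plain loops):

```python
# Software sketch of the PISO (serializer) and SIPO (deserializer)
# shift registers. In hardware the serial side runs at `width` times
# the parallel clock; here we just model the bit ordering.

def piso(words, width=8):
    """Parallel In, Serial Out: shift each word out MSB-first."""
    bits = []
    for word in words:
        for i in range(width - 1, -1, -1):   # MSB first
            bits.append((word >> i) & 1)
    return bits

def sipo(bits, width=8):
    """Serial In, Parallel Out: clock bits into a shift register and
    latch a full word into the parallel-side buffer every `width` bits
    (the 'double buffer' mentioned above)."""
    words, shift_reg = [], 0
    for n, bit in enumerate(bits, start=1):
        shift_reg = ((shift_reg << 1) | bit) & ((1 << width) - 1)
        if n % width == 0:          # word boundary: latch and clear
            words.append(shift_reg)
            shift_reg = 0
    return words

data = [0xA5, 0x3C, 0xFF]
serial_stream = piso(data)
assert sipo(serial_stream) == data   # round trip recovers the words
```

A real SerDes must also recover word alignment from the raw bit stream, which is one of the jobs of the encoding schemes described next.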

  12. Implementations of SerDes are sometimes combined with implementations of encoding/decoding blocks in single blocks. The purpose of encoding/decoding is typically to place at least statistical bounds on the rate of signal transitions to allow for easier clock recovery in the receiver, to provide framing, and to provide DC balance.

  13. 8B/10B encoding A common coding scheme used with SerDes is 8B/10B encoding. This supports DC-balance, provides framing, and guarantees transitions. The guaranteed transitions allow a receiver to extract the embedded clock. The control codes allow framing, typically on the start of a packet. The typical 8B/10B SerDes parallel side interfaces have 1 clock line, 1 control line and 8 data lines. Such serializer plus 8B/10B encoder and deserializer plus decoder blocks are defined in the Gigabit Ethernet specification.
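The DC-balance mechanism can be illustrated with the well-known 8B/10B comma character K28.5, whose two running-disparity variants are fixed by the code. A full encoder needs the complete 8B/10B code tables, which this sketch deliberately omits:

```python
# Illustrative check of 8B/10B DC balance using the K28.5 comma
# character. The two 10-bit variants below are the standard
# running-disparity (RD) encodings of K28.5.

K28_5_RD_MINUS = "0011111010"   # sent when running disparity is negative
K28_5_RD_PLUS  = "1100000101"   # sent when running disparity is positive

def disparity(code_group):
    """Ones minus zeros in a 10-bit code group."""
    return code_group.count("1") - code_group.count("0")

# The two variants are bit-wise complements, so alternating between
# them according to the running disparity keeps the long-run counts
# of ones and zeros equal -- the DC balance the slide mentions.
complement = "".join("1" if b == "0" else "0" for b in K28_5_RD_MINUS)
assert complement == K28_5_RD_PLUS
assert disparity(K28_5_RD_MINUS) + disparity(K28_5_RD_PLUS) == 0
```

The long run of five identical bits in K28.5 also cannot occur inside valid data code groups, which is what makes it usable for framing alignment.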

  14. 64B/66B encoding Another common coding scheme used with SerDes is 64B/66B encoding. This scheme statistically delivers DC-balance and transitions through the use of a scrambler. Framing is delivered through the deterministic transitions of the added framing bits. Such serializer plus 64B/66B encoder and deserializer plus decoder blocks are defined in the 10 Gigabit Ethernet specification. The transmit side is composed of the collection of a 64B/66B encoder, a scrambler, and a gearbox that converts the 66B signal to a 16 bit interface. A further serializer then converts this 16 bit interface into a fully serial signal.
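The self-synchronizing scrambler used with 64B/66B in 10 Gigabit Ethernet is specified by the polynomial x^58 + x^39 + 1. A bit-serial sketch follows (Python; real hardware processes many bits per clock, and the framing-bit and gearbox stages are omitted):

```python
# Bit-serial model of the 64B/66B self-synchronizing scrambler
# (polynomial x^58 + x^39 + 1). The 58-bit state holds previously
# transmitted scrambled bits; taps sit 39 and 58 bits back.

STATE_MASK = (1 << 58) - 1

def scramble(bits, state=0):
    """s[i] = b[i] XOR s[i-39] XOR s[i-58]; shift each output bit in."""
    out = []
    for b in bits:
        s = b ^ ((state >> 38) & 1) ^ ((state >> 57) & 1)
        state = ((state << 1) | s) & STATE_MASK
        out.append(s)
    return out

def descramble(bits, state=0):
    """Self-synchronizing: the *received* bits feed the same state,
    so the descrambler realigns even after bit errors (which affect
    only a few output bits each)."""
    out = []
    for s in bits:
        b = s ^ ((state >> 38) & 1) ^ ((state >> 57) & 1)
        state = ((state << 1) | s) & STATE_MASK
        out.append(b)
    return out

payload = [1, 0, 1, 1, 0, 0, 1, 0] * 8     # one 64-bit block
assert descramble(scramble(payload)) == payload
```

Because scrambling only statistically bounds run lengths, 64B/66B relies on the two deterministic framing bits (the "66" part) rather than guaranteed transitions for block alignment.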

  15. Benefits of Serial Backplanes

  16. Parallel:
  – Multiple lines consume board space
  – Lines interfere with each other
  – Each line needs its own termination circuitry
  Serial:
  – Fewer lines yield reduced board space
  – Line interference can be minimized
  – Uses a fraction of the termination circuitry vs. parallel

  17. Area Reduction Converting the “local” parallel data to serial greatly reduces the number of traces, allowing the backplane to shrink. The backplane PCB is the most expensive board in many systems, and the largest; in many cases its size is the limiting factor in reducing the system rack size. Serial backplanes also allow smaller connectors between the “local” PCB and the backplane, further reducing the size and complexity of the system design, basically an 11-to-1 reduction. The two main reasons for implementing a serial backplane are (1) high data throughput with reliable performance and (2) a smaller backplane PCB. The latter is realized through a smaller system rack form factor and fewer layers of PCB material, resulting in lower cost.

  18. Noise Reduction Current serial signaling technologies use differential Input/Output (I/O) buffers. Differential buffers provide much smaller signal swings than historical single-ended buffers. This reduced signal swing results in a lower-power I/O buffer, but more importantly, it significantly lowers noise. The noise-reduction benefit is seen in much lower RFI/EMI, ground bounce, and transmission line effects, including crosstalk and reflections. EMI: electromagnetic interference (also called radio frequency interference, RFI).

  19. Bandwidth Increase Design engineers moving from parallel to serial backplanes have a multitude of implementation options. For example, an engineer converting a legacy design that used PCI 32-bit/33 MHz for both the “local” side of the PCB and the backplane, with a total aggregate bandwidth of 1.056 Gbps (32 bits x 33 MHz), could select a SerDes device that takes in the local PCI data, serializes it, and transmits it at 1 Gbps; or the designer could provide 8 bits of data to each of 4 SerDes channels operating at 256 Mbps. Another option is to further increase the data rate of the serial link. With today’s SerDes technology, the engineer can select slower SerDes devices with more channels, or faster devices with fewer channels or even a single channel. SerDes devices operate from 155 Mbps on the low end up to 10 Gbps on the high end, and incorporate two main signaling technologies: Low Voltage Differential Signaling (LVDS) and Current Mode Logic (CML).

  20. Bandwidth Increase cont... As a general rule of thumb, LVDS operates from 155 Mbps to 1.25 Gbps, while CML operates from 600 Mbps to 10 Gbps. LVDS and CML can interoperate, but require external resistors for level shifting. Therefore, it is important that design engineers consider their existing serial backplane requirements and future needs before embarking on a SerDes backplane design.

  21. Migration Path One of the many benefits of serial backplanes is the ability to migrate to higher speed serial interconnect as system bandwidth requirements increase. A sound, high-speed backplane design methodology supports this migration capability. For example, a user can go from 155 Mbps to 850 Mbps per channel by simply increasing the SerDes device's reference clock.

  22. Programmability – SerDes vs. ASSP The advantage of a programmable SerDes device is its inherent flexibility. The programmable fabric allows the user to customize the “local” side of the PCB, so the user can build in any local bus required, whether PCI or proprietary.

  23. Programmability – SerDes vs. ASSP cont. The flexibility of programmable logic combined with SerDes reduces component count (replacing a PLD or FPGA plus an ASSP SerDes) and shortens time to market. A programmable SerDes also allows flexible I/O assignments, meaning the user can select the optimal pin assignments to ease board layout and potentially eliminate PCB layers on the local board. Another advantage is in I/O voltage levels and I/O types, both of which are programmable selections on programmable SerDes devices.

  24. Conclusion SerDes gives backplane designers significant cost savings through lower PCB costs, smaller form factors, reduced power, lower EMI/RFI, and a straightforward migration path to higher data throughput.

  25. References fpgajournal (slides 2–6, 16, 17, 20–26); Wikipedia (slides 1, 7–14)