
Peta-Cache: Electronics Discussion II Presentation Ryan Herbst, Mike Huffer, Leonid Saphoznikov






Presentation Transcript


  1. Peta-Cache: Electronics Discussion II Presentation Ryan Herbst, Mike Huffer, Leonid Saphoznikov Gunther Haller haller@slac.stanford.edu (650) 926-4257

  2. Flash Storage Option
  • Skim builder as in the option discussed earlier
  • Event server (cache box) as in the other option
  • Shown as two boxes for simplicity; they could be in one box (there are pros and cons)
  • The issue is again interconnect speed
  • Up to 16 1-Terabyte flash boxes for each event server
  • Each PCI-E lane carries 256 MByte/sec; 16 lanes give a total of 4 GByte/sec of bandwidth
  • Each flash box holds only a fraction of the total event store
  • Flash has limited write cycles, so it can't be rewritten frequently (need to enforce with some policy which data is most important)
  • But we don't really want to "burn" the results of a skim into flash, since the goal is to make our own lists (and flash can't be re-burned at will anyway)
  • Flexibility:
    • One event server can hold a subset of the list and send events to the client
    • Or, better, treat the whole set of event servers as "one" cache and manage the event store so that the parts of the list held in other pizza boxes are kept in that cache rather than discarded
  • The question is again how to populate the flash most effectively
  • Decompression in the event server
  • Flash bad-block management in the event server
  • Reed-Solomon EDAC in the event server
  • Can consider operating without the cache box: with 4000 clients going after the same block, the last one to get its data is ~300 msec later (see the arithmetic sketch after this slide)
  [Slide diagram; labels: Tape, Disk Storage, Skim builder(s), up to 16 Flash Storage boxes per Event Server over PCI-E, Event Servers, Ethernet/PCI-E/etc. (optionally direct IO), Clients (1, 2, or 4 cores), up to 1,500 cores in ~800 units?]
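A back-of-envelope check of the bandwidth and latency figures on this slide; a minimal sketch, assuming an event/block size of roughly 300 KByte (the block size is not stated on the slide) and that, without a cache box, one event server streams the same block to the 4000 clients serially at the aggregate PCI-E bandwidth.

```python
# Sanity check of the slide's PCI-E bandwidth and "last client" latency figures.
# The ~300-KByte block size is an assumption made here so the numbers can be
# reproduced; the slide itself quotes only the ~300 msec result.

PCIE_LANE_MBPS = 256          # MByte/s per PCI-E lane (slide figure)
LANES = 16

total_bw_mbps = PCIE_LANE_MBPS * LANES
print(f"Aggregate PCI-E bandwidth: {total_bw_mbps} MByte/s "
      f"(~{total_bw_mbps / 1024:.1f} GByte/s)")          # ~4 GByte/s, as on the slide

# No cache box: 4000 clients request the same block, served one after another.
CLIENTS = 4000
BLOCK_KB = 300                # assumed block size (not stated on the slide)

last_client_latency_s = CLIENTS * BLOCK_KB / 1024 / total_bw_mbps
print(f"Last of {CLIENTS} clients is served after "
      f"~{last_client_latency_s * 1e3:.0f} msec")
```

With these assumptions the last client is served roughly 290 msec after the first, consistent with the ~300 msec quoted on the slide.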

  3. Flash-Box, Event Box
  • Flash Memory Box
    • 8-, 16-, or 32-Gbit NAND devices
    • For 1 Terabyte, need 250 32-Gbit devices (see the part-count sketch after this slide)
    • All on board, or
    • 32-GByte memory cards (DIMMs): need > 30 DIMMs
    • Preliminary placement on a 19-inch-rack PCB shows that we can fit 1 Terabyte on a single board
    • PCI-E to PCI-X bridge (to get 64-bit addressing space)
    • No smarts in here
  • Event (Pizza) Box
    • 8 FX40 Xilinx parts (each has 2 450-MHz PPCs)
    • 16 GBytes of RLDRAM2
    • 8 PLX8508 5-port PCI-E switches
    • 2 PLX8532 8-port switches (32 lanes)
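A minimal arithmetic sketch of the flash-box part counts above, assuming binary units (1 Terabyte = 1024 GByte), which the slide does not state explicitly.

```python
# Quick check of the part counts for one 1-Terabyte flash box.

TB_GBYTES    = 1024     # GByte per Terabyte (binary units assumed)
DEVICE_GBIT  = 32       # NAND device size in Gbit
DIMM_GBYTE   = 32       # memory-card (DIMM) size in GByte

devices = TB_GBYTES * 8 / DEVICE_GBIT   # 8 bits per byte
dimms   = TB_GBYTES / DIMM_GBYTE

print(f"32-Gbit devices per 1-TByte box: {devices:.0f}")   # 256; slide rounds to ~250
print(f"32-GByte DIMMs per 1-TByte box:  {dimms:.0f}")     # 32, i.e. > 30 DIMMs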

  4. Flash & Event Server Boxes
  • Flash Box
    • 4-Gbit chips: $30; 8-Gbit: $60
    • 4-GByte device quote: $110 at a minimum quantity of 1000 (a stack of four 1-GByte dies)
    • 1 Petabyte: 1,000 boxes, total ~$27 million (see the cost sketch after this slide)
  • Event Server
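A rough cost roll-up implied by these figures; the 250-devices-per-1-TByte-box count is carried over from the previous slide, and the per-box subtotal is derived here rather than quoted on the slide.

```python
# Flash cost roll-up for the 1-Petabyte option, using the slide's quoted prices.

DEVICE_PRICE_USD = 110      # 4-GByte device, minimum quantity 1000
DEVICES_PER_BOX  = 250      # ~1 TByte per box (previous slide)
BOXES_PER_PB     = 1000     # 1 PByte at 1 TByte per box

flash_per_box = DEVICE_PRICE_USD * DEVICES_PER_BOX
total = flash_per_box * BOXES_PER_PB

print(f"Flash cost per 1-TByte box: ${flash_per_box:,}")          # $27,500
print(f"Flash cost for 1 PByte:     ${total / 1e6:.1f} million")  # ~$27 M, as quoted
```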

  5. Pizza box block diagram (needs some modification)
  [Block diagram; labeled components: (In) PCI Express x16, PLX8532 switches, x4 links, PLX8508 switches, Xilinx XC4VFX40 FPGAs with PPC 405 cores and 1-GByte RLDRAM II, (Out) PCI Express; component totals are sketched below]
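A small tally of the event (pizza) box parts listed on slide 3; the exact switch topology is not fully recoverable from the transcript of this diagram, so only aggregate counts are checked, and the per-FPGA RLDRAM figure is derived from the slide-3 totals rather than stated directly.

```python
# Component tally for one event (pizza) box, taken from the slide-3 parts list.

parts = {
    "Xilinx XC4VFX40 FPGA":        8,   # each with 2 embedded 450-MHz PPC 405 cores
    "RLDRAM II (GByte, total)":    16,
    "PLX8508 5-port PCI-E switch": 8,
    "PLX8532 8-port PCI-E switch": 2,   # 32 lanes each
}

ppc_cores       = parts["Xilinx XC4VFX40 FPGA"] * 2
rldram_per_fpga = parts["RLDRAM II (GByte, total)"] / parts["Xilinx XC4VFX40 FPGA"]

print(f"Embedded PPC 405 cores per box: {ppc_cores}")         # 16
print(f"RLDRAM II per FPGA: {rldram_per_fpga:.0f} GByte")     # 2 GByte (derived)
```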

  6. Event Processing Center
  [System diagram; labels: HPSS, disks, file system fabric, pizza box as skim builder, switch(es), in/out protocol-conversion pizza boxes, fabric, sea of cores, event processing node]
