
News from Alberto et al. Fibers document separated from the rest of the computing resources



  1. News from Alberto et al.
  • Fibers document separated from the rest of the computing resources:
    • https://edms.cern.ch/document/1155044/1
    • https://edms.cern.ch/document/1158953/1
  • Documents finalized (end of August)
  • Power requirements: 100 kW total
  • Racks:
    • Racks on Jura side: 800 mm depth (standard CERN racks)
    • Racks on Saleve side: 1000 mm depth
    • Dimensions of the racks' base plates are needed (for the supports)

  2. Power
  • Computing nodes: 9 kW/rack, 2 to 5 racks
  • Storage nodes: 5 kW/rack, 3 racks
  • Support: 1/2 rack, 4 kW
  • DCS, DSS: 4 kW, 2 racks
  • Network: 4 kW
  • GTK: 3 kW/rack, 3 racks
  • Cooling doors: 1 kW/door, 10 kW
  • Total: 68 kW (with 50% safety margin: 95 kW)
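The power budget above can be tallied in a few lines of Python as a sanity check. The rack counts marked "2 to 5" are taken at their upper bound here, so this gives an upper-bound estimate; the slide's own 68 kW total evidently assumes a smaller configuration that is not fully spelled out.

```python
# Upper-bound tally of the per-subsystem power figures from the slide.
subsystems_kw = {
    "computing nodes": 9 * 5,   # 9 kW/rack, up to 5 racks
    "storage nodes":   5 * 3,   # 5 kW/rack, 3 racks
    "support":         4,       # half rack, 4 kW
    "DCS/DSS":         4,
    "network":         4,
    "GTK":             3 * 3,   # 3 kW/rack, 3 racks
    "cooling doors":   1 * 10,  # 1 kW/door, 10 doors
}

total_kw = sum(subsystems_kw.values())
with_margin_kw = total_kw * 1.5  # 50% safety margin

print(f"upper-bound total: {total_kw} kW")
print(f"with 50% margin:   {with_margin_kw:.0f} kW")
```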

  3. Computing nodes
  • How much total computing power for 2012?
  • Specify AMD or Intel? How many processors? How many cores?
  • Intel Westmere architecture (32 nm): Xeon 5600, 6 cores
    • X series, up to 3.33 GHz
    • E series, up to 2.66 GHz
    • L series, low consumption, up to 2.26 GHz
  • CERN Openlab tested system:
    • 2× X5670 @ 2.93 GHz, 12 cores
    • 2× 6 GB RAM, 450 W power load (at full load)
    • 238 HEPSPEC06 (24 processes) = 20/core
  • In one 9 kW rack: 20 systems, 4760 HEPSPEC06, approximately 1250 kSi2k
  • How much RAM? How much local disk space?
  • How many 10 Gb and 1 Gb ports per machine? (How many switches?)
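The benchmark arithmetic on this slide can be checked in a short script. The per-core figure divides by the 12 physical cores of a dual X5670 system (the 24 benchmark processes run on hyper-threads), and the HS06-to-kSI2k conversion factor used below is an assumption, since the slide does not state one.

```python
# Back-of-the-envelope check of the CERN Openlab benchmark figures.
hs06_per_system = 238       # HEPSPEC06, 24 benchmark processes (HT on)
physical_cores = 2 * 6      # two 6-core X5670 CPUs
hs06_per_core = hs06_per_system / physical_cores  # ~19.8, i.e. ~20/core

systems_per_rack = 20       # 20 x 450 W = 9 kW, filling one rack
rack_hs06 = systems_per_rack * hs06_per_system

# Assumed conversion factor of ~4 HS06 per kSI2k (not given on the slide);
# it lands near the slide's "approximately 1250 kSi2k".
rack_ksi2k = rack_hs06 / 4

print(f"{hs06_per_core:.1f} HS06/core, {rack_hs06} HS06/rack, ~{rack_ksi2k:.0f} kSI2k")
```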

  4. Storage nodes
  • How much disk for 2012?
  • Availability of CMS hardware:
    • 12 disk arrays with 12 disks each and redundant Fibre Channel interfaces:
      • 120 WD Raptor 300 GB disks (bought at the beginning of the year)
      • 24 1 TB disks
    • 2× 32-port Fibre Channel switches
    • 10 Dell 2950 servers with redundant Fibre Channel cards
  • Questions:
    • More technical details requested from our CMS colleagues
    • Cost?
    • Maintenance?
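The disk inventory above works out to a raw capacity that is easy to compute; note this ignores RAID and filesystem overhead, so usable space on the arrays will be noticeably lower.

```python
# Raw capacity of the CMS disk hardware listed on the slide.
raptor_tb = 120 * 0.3   # 120 x 300 GB WD Raptor disks
big_tb    = 24 * 1.0    # 24 x 1 TB disks
raw_tb = raptor_tb + big_tb

print(f"raw capacity: {raw_tb:.0f} TB ({raptor_tb:.0f} TB + {big_tb:.0f} TB)")
```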

  5. Procurement
  • Racks and cooling doors: difficult/not possible to join the CERN-IT tender
  • Network equipment: handled by CERN-IT; installation, management and maintenance included
  • Computing & storage nodes:
    • How to handle the purchase/acquisition? (CERN, Mainz? ...)
    • Time plan for purchase and delivery?

  6. Manpower
  • Who is going to do/follow/check, and when:
    • Installation
    • Commissioning
    • Operations
  • More:
    • What about UPS system purchase/installation?
    • Remote monitoring of cooling doors (discussion ongoing with the DCS central team)
