
Presentation Transcript


  1. Extract from the tender:
  • 6 multi-CPU systems to be used as compute servers. These must be based on Intel Pentium CPUs and include a RAID-based disk storage subsystem. Approximately 6TB of usable disk space is required across these systems.
  • 24 dual-processor PCs of varying specification. This lot is divided into sub-lots, one for delivery to the UK, the other to the USA. The sub-lots may be bid for separately.
  • 5TB of disk storage (to be placed on an existing Fibre Channel based storage subsystem in the USA).
  • 4 tape libraries for delivery in the UK.
  We included one line that caused the responses to vary enormously: "The system should be able to support substantial IO loads from file system to CPU (probably in the range 100-300MB/sec)."

  2. The Tender – Probable solutions
  • 4 University Sites:
    • 8-way multiprocessor 700MHz Xeon system with 2GB RAM and 0.5TB of direct-attached Fibre Channel RAID disks.
  • RAL:
    • Two of the above servers.
    • 2.5TB of disk, probably using an FC switch based SAN.
    • 16 farm nodes (dual processor, pizza-box style?).
  • Fermilab, Chicago:
    • 5TB of disks to attach to the existing SGIs' large disk storage.
    • 8 high-powered workstation-class PCs (dual PIII, 180GB IDE disk).
  • Tape systems have been postponed, as Fermi have had trouble with Exabyte Mammoth drives.

  3. Why Multi-CPU, not Farm?
  • The system is to support multiple physicists working simultaneously on different data sets (analysis, not MC).
  • All disks are locally attached, hence high IO bandwidth.
  • Easier to manage one large system than a farm?
  • Large on-chip cache and large memory should provide excellent performance.
  • A large chunk of money allows you to consider buying this type of well-engineered, high-end system, which can be expanded later with commodity-priced PCs to provide a farm.
  • Farms are well suited to Monte Carlo work - the RAL 16-node farm.

  4. How much?
  • Servers of this type cost from £25K to £50K+.
  • Fibre Channel disk subsystems: >= £15K.
  • Prices vary from vendor to vendor. Bids for the total contract varied from £800K to greater than £2 million.
  • We hope to spend less than £700K (see the rough arithmetic below).
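A back-of-envelope check using only the figures above and the slide 2 configurations (six large servers in total: four university sites plus two at RAL; pairing one FC disk subsystem with each server is our assumption, not from the tender):

    servers:        6 x £25K  to  6 x £50K+   =  £150K to £300K+
    FC subsystems:  6 x £15K minimum          =  £90K+
    subtotal:                                    £240K to £390K+

That leaves at most £310K-£460K of the £700K target for the RAL farm, the Fermilab disks and the tape libraries.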

  5. Evaluation Kit
  • Intel 8-way 500MHz Xeon tested earlier (October 2000).
  • Dell 8450 8-way 700MHz Xeon with 2GB RAM and 2 x 9GB SCSI RAID disks, with a Dell 650F FC disk storage unit containing 10 x 36GB disks. (Installed Fermi 6.1.1, tested CDF code and ran bonnie benchmarks plus others.)
  • Benchmarking IBM kit in Greenock, Scotland, next week.
  • Other contenders: Compaq, Hitachi, Solution Centre, Kingswell, Pars.
  • Tender advertised in the EU journal.

  6. Benchmarking
  • Main aim was to have high IO bandwidth; FC should give 100MB/sec.
  • The bonnie test is difficult: the first rule is to use files much larger than RAM, which is hard when you have 2GB of RAM, as the maximum file size on ext2 is 2GB.
  • Simple copying tests were hampered by Linux caching (a sketch of one way round this follows below).
  • Physics tests also had to be modified to stop caching skewing the results.
  • CPU tests scale well; IO performance was less than expected. It will be clearer when we have used other equipment.
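The caching problem above is worth making concrete. Below is a minimal sketch of the style of test the slide describes: time sequential reads over several pre-created files, each kept under ext2's 2GB limit, with a combined size several times physical RAM so the Linux page cache cannot hold the working set. The program name, the 1MB buffer and the file layout are illustrative assumptions, not the benchmark actually used in the evaluation.

/* read_bw.c - sketch of a cache-defeating sequential read test.
 * Pass a list of pre-created data files (each < 2GB for ext2)
 * whose total size is several times physical RAM; otherwise the
 * page cache will serve the reads and inflate the MB/s figure. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/time.h>

#define BUF_SIZE (1 << 20)              /* 1MB read buffer */

static char buf[BUF_SIZE];

int main(int argc, char **argv)
{
    struct timeval t0, t1;
    double bytes = 0.0, secs;
    int i;

    if (argc < 2) {
        fprintf(stderr, "usage: %s file1 [file2 ...]\n", argv[0]);
        return 1;
    }

    gettimeofday(&t0, NULL);
    for (i = 1; i < argc; i++) {
        int fd = open(argv[i], O_RDONLY);
        ssize_t n;

        if (fd < 0) {
            perror(argv[i]);
            return 1;
        }
        while ((n = read(fd, buf, BUF_SIZE)) > 0)
            bytes += n;                 /* count bytes actually read */
        close(fd);
    }
    gettimeofday(&t1, NULL);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%.0f MB in %.1f s = %.1f MB/s\n",
           bytes / 1e6, secs, bytes / 1e6 / secs);
    return 0;
}

With 2GB of RAM one might, for example, create eight 1GB files on the RAID volume with dd and pass them all on the command line: the 8GB total comfortably exceeds what the cache can retain, so the reported MB/s reflects the disk subsystem rather than memory.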

  7. 4 Universities
  Glasgow, Liverpool, Oxford and UCL. One large server with 0.5TB of fully mirrored RAID disks (i.e. 1TB raw). The disk subsystem will probably be based on Fibre Channel disks to provide high I/O throughput.

  8. IBM

  9. The RAL system is double a university system, but with 2.5TB of usable disk.

  10. RAL Farm of 16 dual-CPU systems. The 1U servers now available enable efficient use of space and plenty of expandability. Phase two will be in 12-18 months' time, with roughly the same spend.

  11. IBM

  12. The High Bandwidth, High End, High Cost Solution
  Some vendors chose to aim at the 300MB/s end of our suggested IO bandwidth range. Three vendors chose to offer the LSI Logic MetaStor storage system, which is capable of delivering 350MB/sec of bandwidth. The cost of these systems was roughly twice that offered by Dell/IBM, and the cost of expansion was over double the street price per disk.

  13. LSI Logic MetaStor
  The high-end solution offered by some vendors. Provides 350MB/s performance, at a price! Cost of an individual FC 73GB disk (cross-checked below):
  • As quoted: £2000
  • Street price: £700
  • IBM price: £590
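Working the markup through explicitly, as a cross-check of slide 12's "over double street price" observation:

    £2000 / £700  ≈  2.9 x street price
    £2000 / £590  ≈  3.4 x IBM's own price

So expanding the MetaStor with vendor-supplied disks costs nearly three times the street price per disk.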

  14. Fermilab Disks
  • This lot was tightly specified to match existing systems at Fermi.
  • They use Chaparral FC controllers connected to SCSI disks, all in one Kingston(?) shelf. All cables are internal to the shelf; all that comes out is power and fibre.
  • Vendors that offer alternatives are less favoured.

  15. This controller is used by Fermilab and is the preferred option for the disks there. It can be placed in the rack along with 9 disks to keep all cables internal; just Fibre Channel comes out.

  16. The third option
  • A new Chaparral controller: 200MB/sec FC to Ultra160 SCSI disks.
  • This brand-new controller could offer increased bandwidth.
  • Requires new FC HBAs - availability?
  • The size of the vendor is a worry.

  17. Conclusions
  • Tendering is a tricky business
  • Hard to predict
  • Very time consuming
  • Should make a decision 'Real Soon Now'
