

IDE disk servers at CERN

Helge Meinhard / CERN-IT

CERN OpenLab workshop

17 March 2003

Introduction
  • HEP computing in the past: mostly reading from, processing, and writing to (tape) files sequentially
  • Mainframe era (until ~ 1995): single machine, CPUs, tape drives, little disk space
  • In response to the scaling problem, development of the SHIFT architecture (early 1990s)
    • Scalable farm out of ‘commodity’ components
      • RISC CPUs (PowerPC, MIPS, Alpha, PA-RISC, Sparc)
      • SCSI disks
SHIFT architecture

[Diagram: scalable farm of disk servers, tape servers, batch servers, interactive servers, and combined batch-and-disk SMPs, interconnected by networks (FDDI, HiPPI, Myrinet, Ethernet)]

PC batch nodes
  • 1995: First studies at CERN of PCs as batch nodes (Windows NT)
  • 1997 onwards: Rapidly growing interest in Linux (on IA32 only)
  • 1998/99: First production farms for interactive and batch services running Linux on PC hardware at CERN
PC disk servers
  • 1997/98: Prototypes with SCSI disks
  • 1998/99: Prototypes with EIDE disks
    • Different IDE adapters
    • Not RAIDed
  • 1999/2000: First Jumbo servers (20 x 75 GB) put into production
  • 2001: First rack-mounted systems
  • 2002: 97 new servers (54 TB usable)
  • 2003: 1.3 TB usable in one server at 13 kCHF
  • Total usable capacity today: ~ 200 TB
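The 2003 figure above (1.3 TB usable at 13 kCHF) implies a cost of roughly 10 kCHF per usable terabyte. A minimal sketch of that arithmetic, using only numbers from the slides (the helper function itself is illustrative, not from the talk):

```python
# Back-of-the-envelope cost arithmetic from the slide's figures.
# The function name is illustrative; only the numbers come from the talk.

def cost_per_usable_tb(price_kchf: float, usable_tb: float) -> float:
    """Cost in kCHF per usable terabyte of disk server capacity."""
    return price_kchf / usable_tb

# 2003 server: 1.3 TB usable at 13 kCHF -> about 10 kCHF per usable TB
print(cost_per_usable_tb(13.0, 1.3))

# 2002 procurement: 97 servers for 54 TB usable -> roughly 0.56 TB per server
print(54 / 97)
```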
[Chart: storage price evolution, EIDE/PC vs SCSI/RISC, shown for disks only and for complete systems]
[Chart: disk server capacity over time, gross vs usable]
Today’s servers: Specifications
  • 19” rackmount, IA32 processor(s), 1 GB memory, 2 x 80 GB system disks, GigE (1000BaseT), redundant power supplies
  • >500 GB usable space on data disks
    • Hardware RAID offering redundancy
    • Hot-swap disk trays
  • Performance requirements, network to disk:
    • 50 MB/s reading from server @ 500 GB
    • 40 MB/s writing to server @ 500 GB
  • 5 years on-site warranty
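To put the throughput requirements in perspective: at the required sustained rates, streaming the full 500 GB of usable space takes a few hours. A small sketch of that calculation (figures from the slide; the helper and the decimal GB/MB convention are my own):

```python
# Illustrative: time for one full sequential pass over the server's usable
# capacity at the required sustained rates (slide figures: 50 MB/s read,
# 40 MB/s write at 500 GB). Assumes decimal units: 1 GB = 1000 MB.

def full_pass_hours(capacity_gb: float, rate_mb_s: float) -> float:
    """Hours needed to stream the whole usable capacity at a sustained rate."""
    seconds = capacity_gb * 1000 / rate_mb_s
    return seconds / 3600

print(full_pass_hours(500, 50))  # reading: ~2.8 hours
print(full_pass_hours(500, 40))  # writing: ~3.5 hours
```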
Lessons learnt
  • Capacity is not everything; for good performance one also needs
    • CPU, memory, RAID cards
    • Good OS and application software
    • Network connectivity
    • Large number of spindles
  • Firmware of RAID controllers and disks critical
  • Redundancy (RAID) is a must, required performance possible only with mirroring (RAID 1) so far
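The trade-off behind the last point is capacity versus performance: mirroring (RAID 1) sacrifices half the raw capacity, while parity (RAID 5) loses only one disk's worth but, per the slide, could not yet deliver the required write performance. A sketch of the capacity side, with an assumed disk count chosen purely for illustration:

```python
# Illustrative usable-capacity comparison for the two redundancy schemes
# mentioned in the talk. The 12 x 80 GB configuration is an assumption for
# the example, not a documented CERN configuration.

def raid1_usable(n_disks: int, disk_gb: float) -> float:
    """Mirrored pairs: half the raw capacity is usable."""
    return n_disks * disk_gb / 2

def raid5_usable(n_disks: int, disk_gb: float) -> float:
    """Block parity: one disk's worth of capacity is lost to parity."""
    return (n_disks - 1) * disk_gb

print(raid1_usable(12, 80))  # 480 GB usable with mirroring
print(raid5_usable(12, 80))  # 880 GB usable with parity
```

RAID 5 looks better on capacity, which is why software RAID 5 appears under "Outlook" as an area under investigation despite RAID 1 being the only option meeting the performance targets at the time.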
Outlook
  • Good price/performance has raised interest in other application domains at CERN
    • AFS and MS DFS servers
    • Web servers, mail servers
    • Software servers (Linux installation)
    • Data base servers (Oracle, Objectivity/DB)
  • Access pattern of physics analysis likely to change
  • Investigating different file systems (XFS), RAID 5 (in software), …
  • Architecture constantly being reviewed
    • Alternatives investigated: data disks scattered over large number of batch nodes; SAN
Conclusion
  • Architecture of early 1990s still valid
    • May even carry us into the LHC era…
  • Important improvements made
    • Price/performance
    • Reliability (RAID)
  • Will review architecture soon (2003)
    • New application areas
    • New access patterns for physics analysis