
AMS02 SOC. Changes during last TIM.



  1. AMS02 SOC. Changes during last TIM. A. Eline, V. Choutko

  2. New Hardware: 1 file server (Dell 1950) with 2 quad-core Intel X5460 3.16 GHz processors, 1333 MHz FSB and DRAC5; 1 additional FC switch (QLogic 5600), working as a load balancer together with the old one.

  3. Software changes: The operating systems on all file servers, DB servers and the AMS gateway were reinstalled and now run 64-bit SLC4. The DB servers run 64-bit Oracle 10g.

  4. Cluster file system: Since the last TIM, when we received a quotation for a CXFS solution (65 KCHF), we decided to evaluate GFS. We found that it is a stable file system and has worked without any problems for 2 months, though there are some difficulties with GFS resynchronization when one or a few nodes crash; such crashes do not lead to file system corruption.

  5. [Diagram: current AMS cluster layout] Five 64-bit SLC4 computers with GFS tools installed are members of the AMS cluster: DB Server 1, DB Server 2, AMS Gateway, File Server 1 and File Server 2. Each host is connected to both FC switches (FC Switch 1 and FC Switch 2), and the switches are connected to both arrays (RAID 5 10 TB GFS and RAID 5 40 TB GFS).

  6. AMS Storage solution: So now we have two external RAID arrays, 10 and 40 TB. On both arrays we use GFS, which allows a cluster of computers to simultaneously use a block device that is shared between them. GFS reads and writes to the block device like a local file system, but also uses a lock module to let the computers coordinate their I/O so that file system consistency is maintained. One of the nifty features of GFS is perfect consistency: changes made to the file system on one machine show up immediately on all other machines in the cluster.
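
The consistency claim is easy to check from two cluster members. Below is a minimal sketch, assuming (hypothetically) that the GFS volume is mounted at /gfs on every node and that passwordless ssh to a peer named fileserver2 is configured: write a marker file locally, then read it straight back through the other node.

```python
#!/usr/bin/env python
# Sketch: verify that a change made on one GFS node is visible on another.
# The mount point /gfs and the peer host name are hypothetical placeholders.
import subprocess
import time

MOUNT = "/gfs"        # hypothetical GFS mount point, shared by all nodes
PEER = "fileserver2"  # hypothetical second cluster member

marker = "%s/consistency_check.txt" % MOUNT
payload = "written at %f" % time.time()

# Write the marker file on the local node.
with open(marker, "w") as f:
    f.write(payload)

# Read it back through the peer node; GFS locking should make the
# new contents visible there immediately.
remote = subprocess.check_output(["ssh", PEER, "cat", marker])
assert remote.decode().strip() == payload, "GFS nodes disagree"
print("consistent:", payload)
```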

  7. Performance of r/w operations: R/W performance of GFS is very close to the results for XFS. Read: ~470 MB/s (4 GB file); 135 MB/s (16 GB file). Write: ~130 MB/s (4 GB file); 70 MB/s (16 GB file). But the advantage GFS gives us, the possibility for cluster members to use the device simultaneously, can increase the aggregate write speed. For example: 125 MB/s for 2 hosts; 165 MB/s for 3 hosts; 210 MB/s for 4 hosts (all for 16 GB files).
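
For reference, throughput figures like those above can be obtained with a simple sequential read/write test. A minimal sketch follows; the path /gfs/bench.dat, the 4 GB size and the 1 MB block size are illustrative, and the test file should exceed RAM (e.g. 16 GB) so that the read pass is not served from the page cache.

```python
#!/usr/bin/env python
# Sketch: sequential write/read throughput test on the GFS mount.
import os
import time

PATH = "/gfs/bench.dat"  # hypothetical test file on the GFS volume
SIZE_GB = 4
BLOCK = 1024 * 1024      # 1 MB blocks
blocks = SIZE_GB * 1024

buf = b"\0" * BLOCK

t0 = time.time()
with open(PATH, "wb") as f:
    for _ in range(blocks):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())  # make sure the data reached the array
wr = SIZE_GB * 1024.0 / (time.time() - t0)

t0 = time.time()
with open(PATH, "rb") as f:
    while f.read(BLOCK):
        pass
rd = SIZE_GB * 1024.0 / (time.time() - t0)

print("write: %.0f MB/s  read: %.0f MB/s" % (wr, rd))
```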

  8. LSF: Since the end of summer LSF has been installed on the AMS cluster. We now have one master node, which also operates as an execute host, and two clients (execute hosts). Once a job is submitted from the AMS gateway host (ams.cern.ch), our batch scheduler dispatches it to an appropriate execute node, depending on current load conditions and the resource requirements of the job. So now all analysis and simulation jobs have to go through the batch system. More info about using LSF can be found at: http://ams.cern.ch/AMS/7
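
A small wrapper around LSF's standard bsub and bjobs commands illustrates the submission flow from ams.cern.ch. The queue name "ams" and the job script path are hypothetical placeholders; -q, -o and the %J job-id substitution are standard bsub options.

```python
#!/usr/bin/env python
# Sketch: submit an analysis job to the AMS LSF cluster and list the queue.
# bsub/bjobs are standard LSF commands; the queue name and script path
# below are hypothetical placeholders.
import subprocess

def submit(script, queue="ams", log="job.%J.out"):
    """Submit `script` to LSF; %J in the log name becomes the job id."""
    out = subprocess.check_output(
        ["bsub", "-q", queue, "-o", log, script])
    print(out.decode().strip())  # e.g. "Job <1234> is submitted ..."

submit("/home/ams/run_analysis.sh")
subprocess.call(["bjobs"])       # show pending and running jobs
```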

  9. Summary: The structure of the AMS scientific operations centre is now clear, and in the future we will simply extend its separate parts depending on our needs. We hope that at flight time the AMS cluster will look like this:

  10. [Diagram: planned AMS cluster at flight time] All members of the AMS GFS cluster are connected to the FC switches and have direct access to all arrays. These arrays can be exported via NFS to the production nodes. Components shown: four RAID 6 126 TB GFS arrays plus the RAID 5 10 TB and RAID 5 40 TB GFS arrays, FC Switch 1 and 2, DB Server 1 and 2, AMS Gateway, File Servers 1-4, and Production Nodes 1-6 attached via Ethernet (NFS). The legend marks components as already existing, planned for 2009, or planned for 2010-2012.
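
As a sketch of the NFS step, the snippet below generates /etc/exports entries that would publish the arrays to the production nodes. All mount points and host names are hypothetical placeholders; rw, sync and no_subtree_check are standard NFS export options.

```python
#!/usr/bin/env python
# Sketch: generate /etc/exports entries so the GFS arrays can be served
# over NFS to the production nodes. Paths and host names are hypothetical.
ARRAYS = ["/gfs/raid10tb", "/gfs/raid40tb"]
NODES = ["prodnode%d" % i for i in range(1, 7)]

OPTS = "rw,sync,no_subtree_check"
for path in ARRAYS:
    clients = " ".join("%s(%s)" % (n, OPTS) for n in NODES)
    print("%s %s" % (path, clients))
```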
