
Cluster currently consists of: 1  Dell PowerEdge 2950






Presentation Transcript


  1. Cluster currently consists of:
     - 1 Dell PowerEdge 2950: 3.6 GHz dual quad-core Xeons (8 cores), 16 GB RAM. Original GRIDVM - SL4 VMware host.
     - 1 Dell PowerEdge SC1435: 2.8 GHz dual quad-core Opterons (8 cores), 16 GB RAM. File server with 8.6 TB of disk space.
     - 11 Dell PowerEdge SC1435: 2.8 GHz dual quad-core Opterons (8 cores), 16 GB RAM. Worker nodes.
     - 16 Dell PowerEdge M605 blades: 2.8 GHz dual six-core Opterons (12 cores), 32 GB RAM. Worker nodes.
     Total: 296 cores
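The per-machine core counts on the slide can be cross-checked with a few lines of arithmetic (the machine list below is taken directly from the slide):

```python
# Core inventory as listed on the slide: (number of machines, cores each).
machines = [
    (1, 8),    # PowerEdge 2950, dual quad-core Xeon (GRIDVM host)
    (1, 8),    # PowerEdge SC1435, dual quad-core Opteron (file server)
    (11, 8),   # PowerEdge SC1435, dual quad-core Opteron (worker nodes)
    (16, 12),  # PowerEdge M605 blades, dual six-core Opteron (workers)
]
total_cores = sum(count * cores for count, cores in machines)
print(total_cores)  # 296, matching the slide's total
```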

  2. UJ-ATLAS: ATHENA installed; using the Pythia event generator to study various Higgs scenarios. DST / NRF Research Infrastructure.

  3. Diamond Ore Sorting (Mineral-PET): S Ballestrero, SH Connell, M Cook, M Tchonang, Mz Bhamjee + Multotec. GEANT4 Monte Carlo simulation; online diamond detection.

  4. Diamond Ore Sorting (Mineral-PET):
     - Simulation of radiation dose as a function of position from a body of radioactive material: full-physics Monte Carlo and a simplified numerical model.
     - PET point-source image with automatic detector-parameter tweaking: misaligned before optimisation, aligned after optimisation.
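The dose-versus-position study pairs a full-physics Monte Carlo with a simplified numerical model. As an illustration only (not the group's GEANT4 code), a toy Monte Carlo for an isotropic point source reproduces the inverse-square falloff that the simplified model would capture analytically; the detector area and sample count here are arbitrary assumptions:

```python
import math
import random

def hit_fraction(r, detector_area=1.0, n=200_000, seed=42):
    """Toy Monte Carlo: fraction of isotropically emitted particles from a
    point source that strike a detector of area A at distance r.
    The detector subtends solid angle A / r**2, so the expected fraction
    is A / (4*pi*r**2) -- the inverse-square law.  Valid for r large
    enough that A / (2*pi*r**2) <= 2 (small detector approximation)."""
    rng = random.Random(seed)
    # A spherical cap of solid angle A/r^2 corresponds to directions with
    # cos(theta) above this cutoff; cos(theta) is uniform on [-1, 1].
    cos_cut = 1.0 - detector_area / (2.0 * math.pi * r * r)
    hits = sum(1 for _ in range(n) if rng.uniform(-1.0, 1.0) > cos_cut)
    return hits / n

for r in (1.0, 2.0, 4.0):
    # Fraction roughly quarters each time the distance doubles.
    print(r, hit_fraction(r))
```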

  5. Monte Carlo (GEANT4) Particle Tracking – Accelerator Physics, Detector Physics

  6. The stellar astrophysics group at UJ - astrophysics projects on the UJ fast computing cluster:
     A large group of researchers, co-led by Chris Engelbrecht at UJ and involving three SA institutions, an Indian institution and eight postgraduate students in total, is working on important corrections to current theories of stellar structure and evolution. To this end, the group uses the UJ cluster for two essential functions:
     - Performing Monte Carlo simulations of random processes to determine the statistical significance of purported eigenfrequency detections in telescope data. A typical run takes about 48 hours on the majority of the cluster nodes.
     - Running stellar models containing new physics in the stellar structure codes (not in use yet; implementation expected later in 2013).
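The first function above, Monte Carlo estimation of how significant a periodogram peak is, can be sketched in miniature. This is a standard-library toy, not the group's pipeline: the frequency grid, Gaussian noise model, trial count, and injected test signal are all illustrative assumptions.

```python
import cmath
import math
import random

def max_power(samples, times, freqs):
    """Largest DFT-style power over a grid of trial frequencies."""
    n = len(samples)
    return max(
        abs(sum(x * cmath.exp(-2j * math.pi * f * t)
                for x, t in zip(samples, times))) ** 2 / n
        for f in freqs
    )

def false_alarm_probability(observed_peak, times, freqs,
                            n_trials=200, sigma=1.0, seed=0):
    """Monte Carlo significance: fraction of pure-noise realisations
    whose strongest peak is at least as high as the observed one."""
    rng = random.Random(seed)
    exceed = sum(
        max_power([rng.gauss(0.0, sigma) for _ in times], times, freqs)
        >= observed_peak
        for _ in range(n_trials)
    )
    return exceed / n_trials

# Toy data: a 1.0 Hz sinusoid buried in unit Gaussian noise.
rng = random.Random(1)
times = [0.1 * i for i in range(80)]
data = [2.0 * math.sin(2 * math.pi * 1.0 * t) + rng.gauss(0.0, 1.0)
        for t in times]
freqs = [0.5 + 0.05 * k for k in range(21)]  # trial grid, 0.5 .. 1.5 Hz
peak = max_power(data, times, freqs)
fap = false_alarm_probability(peak, times, freqs)
print(f"peak power {peak:.1f}, false-alarm probability {fap:.3f}")
```

A low false-alarm probability means pure noise almost never produces a peak that strong, so the detection is significant; the real runs do this at far larger scale, which is why they occupy most of the cluster for about 48 hours.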



  9. Successful test: 20 September 2012 - CHAIN interoperability, in which some SA Grid sites participated: UJ, UFS, UCT and CHPC. Shown below - gLite sites in SA.

  10. Feature of the UJ Research Cluster: interoperability is maintained on 2 grids, OSG and gLite, using virtual machines (a compute element and a user interface for each platform). Shown below - OSG sites.

  11. Currently in the middle of an upgrade:
     - Nodes and virtual machines run a spread of Scientific Linux 4, 5 and 6 to keep services online.
     - The system administrator is a South African currently based at CERN in Europe, able to administer the cluster using remote tools.
     - Using PXE and Puppet, a node can be rebooted and reinstalled to any version of Scientific Linux and EMI (European Middleware Initiative) within 45 minutes.

  12. Trying to maintain usability for:
     - SAGrid
     - ATLAS (Large Hadron Collider)
     - ALICE (Large Hadron Collider)
     - e-NMR (bio-molecular)
     - OSG: ATLAS jobs running for the last 9 months, in the production queue (in test mode) for the last 4 weeks.
     It is difficult to keep both OSG and gLite running - when one demands an upgrade, the other breaks. It is important, though: grids are all about joining computers, and we are helping to keep compatibility between the two big physics grids.
     Currently on the to-do list:
     - Finish the partially completed Scientific Linux upgrade
     - Return OSG to functional status
     - Set up an IPMI implementation - allowing complete remote control at a lower level than the OS.
