
Implementation of the UCLA Grid Using the Globus Toolkit Grid Center’s 2005 Community Workshop




  1. Implementation of the UCLA Grid Using the Globus Toolkit
  Grid Center’s 2005 Community Workshop, University of California, Los Angeles
  Kejian Jin (kjin@ats.ucla.edu), Prakashan Korambath (ppk@ats.ucla.edu)
  • Description of the mechanisms used by your portal to utilize production Grid resources
  • How your portal satisfies security requirements (authentication, authorization, accounting, auditing)
  • Descriptions of the tools your project used to construct your portal
  • How your project overcame interesting challenges in building your portal
  • Description of challenges ahead for your project regarding portal development and deployment

  2. Who Are We?
  • Support academic research computing at UCLA
  • Host 10+ clusters in our data center
  • Support additional clusters on campus
  • Develop software and provide high-performance computing consulting
  [Chart: computational growth at UCLA in GFLOPS, October 2003, December 2004, and projected 2005: campus +484%, IDRE +1738%]

  3. UCLA Grid Architecture
  [Diagram: a uniform browser-based interface connects users over https to a web server hosting the UCLA Grid Portal & CA. The portal holds the user's single credential and communicates with a Grid Appliance at each of Clusters I, II, and III; on each appliance the credential is mapped to a local ID before jobs reach that cluster's head node.]
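The "credential mapped to local ID" step in the architecture above is what the Globus Toolkit's grid-mapfile does: each cluster keeps a file associating certificate subject DNs with local accounts. The sketch below is illustrative only (the DNs and usernames are invented), but the quoted-DN-then-username line format matches the standard grid-mapfile layout.

```python
# Illustrative sketch of grid-mapfile handling: mapping a certificate
# DN (the user's single Grid credential) to a local account on one
# cluster. All DNs and usernames below are made-up examples.

def parse_gridmap(text):
    """Parse grid-mapfile lines of the form: "<subject DN>" local_user"""
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        # The DN is quoted and may contain spaces; the local
        # username follows the last space on the line.
        dn, _, local = line.rpartition(" ")
        mapping[dn.strip('"')] = local
    return mapping

gridmap = parse_gridmap('''
# example grid-mapfile on one cluster's Grid Appliance
"/O=UCLA/OU=Grid/CN=Jane Researcher" jresearch
"/O=UCLA/OU=Grid/CN=Ken Student" kstudent
''')

print(gridmap["/O=UCLA/OU=Grid/CN=Jane Researcher"])  # -> jresearch
```

Because each cluster owns its own mapfile, the same Grid credential can map to a different local username on every cluster, which is exactly the situation slide 11 describes.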

  4. History
  • Started as a UCLA Technology Sandbox project in 2002 using GT2.0
  • Developed the web-based UCLA Grid Portal using GT3.x in 2003
  • Added the first cluster to the UCLA Grid in June 2004
  • Grant from Sun Microsystems for 8 Grid Appliance nodes in 2004
  • Currently, there are 6 clusters on the UCLA Grid, representing Physics, Astronomy, Chemistry, Biology, Social Sciences, Neuroimaging, Electrical, Chemical, and Mechanical Engineering, and Materials Science
  • Compute power available through the UCLA Grid Portal: 6 clusters, 383 nodes, 5625 GFlops aggregate peak performance
  • Began porting the UCLA Grid Portal to GT4 in April 2005

  5. Features
  • Automatic certificate-signing process: an SSH web client is used to verify the user's identity
  • Resource discovery: the Index Service is used to retrieve information from participating clusters
  • Job submission: generic parallel and serial job submission; customized application submission services for Gaussian, Q-Chem, xmd, Mathematica, and Matlab; the GRAM service is used for job submission; the SGE local scheduler is used by all clusters, and other schedulers are supported
  • Data management: upload files from the local machine to any target cluster; Cluster File Manager; file transfer between clusters; GridFTP is used extensively in this service
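To make the GRAM job-submission feature concrete, the sketch below builds a classic GT2-style RSL job description from a few parameters. This is not the portal's code (the GT3/GT4 portal would submit through GRAM's service interfaces, typically via the Java CoG Kit), and the executable path is a made-up example, but the RSL attribute syntax shown is the standard one.

```python
# Illustrative sketch (not the portal's actual code): composing a
# GT2-style RSL string of the kind GRAM accepts for a generic
# parallel or serial job. The executable path is a hypothetical example.

def make_rsl(executable, count=1, arguments=(), stdout="job.out"):
    """Build an RSL string like &(executable=...)(count=...)..."""
    parts = [f"(executable={executable})", f"(count={count})"]
    if arguments:
        joined = " ".join(f'"{a}"' for a in arguments)
        parts.append(f"(arguments={joined})")
    parts.append(f"(stdout={stdout})")
    return "&" + "".join(parts)

# e.g. an 8-way run of a (hypothetical) Gaussian binary:
rsl = make_rsl("/usr/local/bin/g03", count=8, arguments=("input.com",))
print(rsl)
```

A customized application service like the portal's Gaussian page can be thought of as a web form whose fields are turned into exactly this kind of job description before hand-off to GRAM.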

  6. Resource Discovery

  7. Cluster File Manager

  8. Data Visualization • File formats supported: GAMESS, Gaussian, Q-Chem, CML, PDB, Ghemical, XYZ, CIF, HIN, Jaguar, MOL, MOPAC, Spartan
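A visualization service supporting many file formats needs some dispatch step that routes an uploaded result file to the right parser. The sketch below shows one plausible approach, keyed on file extension; the extension-to-format table here is an assumption for illustration, not the portal's actual mapping.

```python
# Hypothetical sketch: routing an uploaded result file to a
# visualization parser by file extension. The extension mapping
# below is assumed for illustration only.
import os

FORMATS = {
    ".cml": "CML", ".pdb": "PDB", ".xyz": "XYZ",
    ".cif": "CIF", ".hin": "HIN", ".mol": "MOL",
}

def detect_format(filename):
    """Return the visualization format name for a filename, or 'unknown'."""
    ext = os.path.splitext(filename)[1].lower()
    return FORMATS.get(ext, "unknown")

print(detect_format("water.PDB"))   # -> PDB
print(detect_format("run1.chk"))    # -> unknown
```

Formats without a distinctive extension (e.g. Gaussian or GAMESS output logs) would need content sniffing instead, which is why extension dispatch alone is only a first pass.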

  9. Live Demo: http://grid.ucla.edu

  10. Technologies Used • Java Servlets • Java CoG Toolkit • SSH web client API • File upload client API • Java Web Services • XML • Globus Toolkit

  11. Challenges
  • Many clusters on campus, with operational issues: different departments, diverse procedures and resources (different schedulers, operating systems, processors, and applications)
  • Some clusters will be contributing cycles to the campus: how to share resources?
  • Users can have different IDs across clusters: there is no common user ID (UID) space
  • Difficult to get job status and resource information, both from a single cluster and across clusters
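The "no common UID space" challenge above can be illustrated with a small sketch: if the portal keys everything on the user's Grid certificate DN and keeps a per-cluster translation table, it can gather job status across clusters even though the local usernames differ. All names and job IDs below are invented for illustration.

```python
# Conceptual sketch of the no-common-UID problem: the same person
# has a different local username on each cluster, so the portal keys
# on the certificate DN and translates per cluster. Data is invented.

ACCOUNTS = {  # grid DN -> {cluster: local username}
    "/O=UCLA/CN=Jane": {"physics": "jane_p", "chem": "jdoe"},
}

JOBS = {  # cluster -> {local username: [queued/running job ids]}
    "physics": {"jane_p": ["42", "43"]},
    "chem": {"jdoe": ["7"]},
}

def jobs_for(dn):
    """Collect one user's jobs across all clusters, keyed by DN."""
    out = {}
    for cluster, user in ACCOUNTS.get(dn, {}).items():
        out[cluster] = JOBS.get(cluster, {}).get(user, [])
    return out

print(jobs_for("/O=UCLA/CN=Jane"))  # jobs from physics and chem combined
```

In the real portal the per-cluster lookup would query each cluster's scheduler through its Grid Appliance rather than a dictionary, but the DN-keyed translation is the essential idea.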

  12. Experiences
  • Expert users prefer the command-line interface: they are comfortable with the UNIX command line, and they need to log in to the head node to compile, given the lack of a web-based development environment
  • New users prefer the web interface, such as the File Manager, to edit, create, and upload files
  • Users with multiple cluster accounts prefer the UCLA Grid Portal because of its single login (transparency)
  • Cluster managers and PIs like the resource discovery interface for visual feedback on cluster status, usage, job information, etc.

  13. Future Directions
  • Web-based development environment: edit source code with syntax highlighting (C, Fortran, C++, Java, etc.), compile and debug code from the web, with immediate testing and feedback; a highly transparent development environment built with XMLHttpRequest, GridFTP, and Java CoG
  • Additional visualization support, such as visualizing plasma physics data
  • Add dynamic resource discovery and meta-scheduling for Sun Grid Engine (SGE) using the Community Scheduler Framework (CSF)

  14. Future Directions - Continued
  • Integrate the UCLA Grid with other Grids by writing an InterGrid Broker Service
  • Clusters are usually behind firewalls; in our infrastructure, the appliance node (where GT is installed) is reachable only from the UCLA Grid Portal web server for security reasons, and the head node does not have GT installed
  • Scenario: a user has an account in another Grid that is trusted by UCLA, and also an account on a cluster in the UCLA Grid; the user wants to submit a job from the other Grid's portal to a participating UCLA Grid cluster where he/she has an account, but that portal cannot submit jobs directly to the cluster because of the firewall
  • How do we solve this problem?
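The InterGrid Broker scenario above can be sketched in a few lines: the outside portal cannot reach a UCLA cluster directly, so it hands the request to a broker, which checks that the source Grid is trusted and then forwards the job inward via the UCLA Grid Portal (the only host allowed to reach the appliance nodes). Everything below is a conceptual illustration with invented names, not a description of an implemented service.

```python
# Conceptual sketch of the proposed InterGrid Broker Service:
# accept a job from a trusted outside Grid and forward it inside
# the firewall via the UCLA Grid Portal. All names are invented.

TRUSTED_GRIDS = {"OtherGrid-demo"}  # grids whose credentials UCLA would accept

def broker_submit(source_grid, user_dn, cluster, job):
    """Gatekeep on trust, then relay the job toward the target cluster."""
    if source_grid not in TRUSTED_GRIDS:
        return "rejected: untrusted grid"
    # A real broker would resubmit through the UCLA Grid Portal,
    # which alone can reach the cluster's Grid Appliance node.
    return f"forwarded {job} for {user_dn} to {cluster} via UCLA portal"

print(broker_submit("OtherGrid-demo", "/O=X/CN=Alice", "cluster1", "job-1"))
print(broker_submit("UnknownGrid", "/O=X/CN=Alice", "cluster1", "job-2"))
```

The design point is that trust and reachability are decided at one choke point (the broker plus the portal), so individual clusters never need firewall exceptions for outside Grids.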

  15. InterGrid Broker Service
  [Diagram: another Grid Portal, serving Clusters A, B, and C, submits jobs through the InterGrid Broker Service to the UCLA Grid Portal, which reaches Clusters 1, 2, and 3 inside the UCLA Grid.]

  16. Why Not Use Other Portals?
  • No other portals were available when the project started
  • Specific user requirements: a large percentage of our users want a specific application service, and no customized application service was available
  • We are only interested in cluster computing in a parallel environment

  17. Questions?
