
Configuration and Deployment of a scalable virtual machine cluster for molecular docking



  1. Configuration and Deployment of a scalable virtual machine cluster for molecular docking
     Nara Institute of Science and Technology, Nara Prefecture, Japan
     Karen Rodriguez, 8/14/2013

  2. Overview
     • Virtual machines (VMs) have been observed to yield far more consistent molecular docking results than a grid configuration.
     • The inhomogeneous results obtained from a grid are thought to be due to physical differences among the cluster's hardware components; cloning and networking identical VMs eliminates this source of variability.
     • This study's objectives are to construct a clustered VM environment that scales with job demand and yields consistent docking results.
     • The system is to be tested, packaged, and deployed on the PRAGMA grid, which will provide sufficient computing resources to perform a full-scale docking project on large protein databases.

  3. Week 7
     • A PRAGMA administrator uploaded the VM images to the Rocks cluster (UCSD) and modified them to allow remote access.
     • The machines required minimal modification before they could be tested: Nadya had already configured the network scripts, so we only had to stop the firewall and edit the MPICH machine file to include the machines' new IP addresses (a minimal sanity check for this setup is sketched after this list).
     • These machines (Masternode and Slavenode1) were tested with the DOCK 6.2 test suite; the results match those obtained when the machines were tested from the physical host.
     • At Nadya's suggestion, the VM images were modified (from the original physical host, not the uploaded copies): they were allocated more storage (from 8 GB to 28 GB), and a user account was created on both machines to protect the VM settings.
     • Users can log into the "duck1" account and run DOCK and MPICH from there. Any modification of VM settings must go through the root account, which is not accessible to everyone over ssh.
     • Keeping a separate root account also makes the VMs easier to manage.
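As a hedged illustration of the machine-file test above, the sketch below is a minimal MPICH "hello" program: each rank prints its host, so seeing both Masternode and Slavenode1 in the output confirms that the machine file and network scripts are working. The source file name, the machine-file name "machines", and the launch commands are assumptions for illustration, not the project's actual files.

```c
/* mpi_hello.c -- minimal MPICH sanity check (illustrative sketch).
 * Assumed build and launch (Hydra-style launcher, file names assumed):
 *   mpicc mpi_hello.c -o mpi_hello
 *   mpiexec -f machines -n 2 ./mpi_hello
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
    MPI_Get_processor_name(host, &len);     /* hostname of the VM */

    /* One line per rank; both VM hostnames appearing in the output
     * means the machine file and networking are set up correctly. */
    printf("rank %d of %d running on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}
```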

  4. Future Plans
     • The images of the modified machines will be sent to Nadya. They will be uploaded to the Gfarm pool, where administrators of remote clusters will be able to retrieve and deploy them.
     • This will be done with the help and guidance of PRAGMA administrator Nadya Williams, who has already uploaded the machines to the UCSD server and is familiar with the changes needed to get them working.
     • We will also communicate with the administrators of the remote clusters to provide the information needed to tailor their deployment scripts to our VMs.
     • Once all the machines are deployed, they will be networked with the aid of VPN tools.
     • The purpose of this is to designate ports that MPICH can access for parallel processing.
     • These ports can be exempted from the firewall rules, so the firewall remains active (protecting the VMs) while the machines can still pass messages to one another (a two-node message-passing check is sketched after this list).
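To illustrate the firewall point in the last bullet: MPICH can be confined to a fixed listening-port range (for example via the MPICH_PORT_RANGE environment variable; the 10000:10100 range below is an arbitrary assumption, not a project setting), and that same range is then exempted from the firewall. The hedged sketch below is a two-rank ping-pong; a completed round trip between the VMs shows that messages really do pass through the opened ports.

```c
/* mpi_pingpong.c -- two-rank message-passing check (illustrative sketch).
 * Assumed setup before launching (port range and file names assumed):
 *   export MPICH_PORT_RANGE=10000:10100
 *   mpiexec -f machines -n 2 ./mpi_pingpong
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, msg = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {                       /* need one rank on each VM */
        if (rank == 0)
            fprintf(stderr, "run with at least 2 processes\n");
        MPI_Finalize();
        return 1;
    }

    if (rank == 0) {
        msg = 42;                         /* arbitrary payload */
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        /* If this prints, traffic crossed the firewall in both directions. */
        printf("round trip complete, payload = %d\n", msg);
    } else if (rank == 1) {
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```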

  5. Kobe

  6. Acknowledgments
     • Mentors: Dr. Jason Haga, UC San Diego Bioengineering; Dr. Kohei Ichikawa, Nara Institute of Science and Technology
     • UCSD PRIME Program: Dr. Gabrielle Wienhausen, Dr. Peter Arzberger, Teri Simas, Jim Galvin, Tricia Taylor
     • Funding: NAIST; Japan Student Services Organization (JASSO); PRIME; PRIME alumna Haley Hunter-Zinck; National Science Foundation
     • Special thanks to Nadya Williams, Olivia Yang, and Josh Wei
