
BioPilot: Data-Intensive Computing for Complex Biological Systems


Presentation Transcript


  1. BioPilot: Data-Intensive Computing for Complex Biological Systems. Nagiza Samatova, Computer Science Research Group, Computer Science and Mathematics Division. Sponsored by the Department of Energy's Office of Science, Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC).

  2. Goals for enabling data-intensive computing in biology. Biological research is becoming a high-throughput, data-intensive science, requiring development and evaluation of new methods and algorithms for large-scale computational analysis, modeling, and simulation.
  Computational algorithms for proteomics:
  • Improve the efficiency and reliability of protein identification and quantification
  • Peptide identification from MS/MS spectra using pre-computed spectral databases
  • Combinatorial peptide search with mutations, post-translational modifications, and cross-linked constructs
  Inference, modeling, and simulation of biological networks:
  • Reconstruction of cellular network topologies using high-throughput biological data
  • Stochastic and differential-equation-based simulations of the dynamics of cellular networks
  Integration of bioinformatics and biomolecular modeling and simulation:
  • Structure, function, and dynamics of complex biomolecular systems in appropriate environments
  • Comparative analysis of large-scale molecular simulation trajectories
  • Event recognition in biomolecular simulations
  • Multi-level modeling of proteins and their conformational space

  3. Comparative molecular trajectory analysis with BioSimGrid
  The scientific challenge: integrated molecular simulation and comparative molecular trajectory analysis of
  • Enzymatic reaction mechanisms
  • Molecular machines
  • The molecular basis of signal and material transport
  The analysis of molecular dynamics trajectories presents a large, data-intensive analysis problem:
  • Routine protein simulations generate TB-scale trajectories.
  • Routine analysis tools (ED) require multiple passes over the data.
  • Correlation analyses have random access patterns.
  • Comparative analyses require access to multiple, distributed trajectories.
  [Figure: NWChem module architecture (integral API, energy/gradient drivers, QM methods from SCF through MP2-MP4 and CCSD(T), CASSCF/GVB, MCSCF, MR-CI-PT, CI, QM/MM and classical force field, QMD/QHOP dynamics, thermodynamics, ESP and property/VIB modules, and numerical libraries such as PEigS, pFFT, LAPACK, BLAS, Global Arrays, ChemIO, ecce) coupled to comparative trajectory analysis tools.]
  T. P. Straatsma, Computational Biology and Bioinformatics
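To make one such per-frame analysis concrete, here is a minimal sketch of computing an RMSD profile along a trajectory using optimal superposition (the Kabsch algorithm). It assumes the trajectory is already in memory as a NumPy array of shape (n_frames, n_atoms, 3); it is not tied to NWChem or BioSimGrid, and the data at the end is synthetic.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (n_atoms, 3) coordinate sets after optimal superposition."""
    P = P - P.mean(axis=0)                      # remove translation
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)           # SVD of the covariance matrix
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T     # optimal rotation
    diff = P @ R.T - Q
    return np.sqrt((diff ** 2).sum() / len(P))

def rmsd_profile(trajectory, reference):
    """One full pass over a trajectory of shape (n_frames, n_atoms, 3)."""
    return np.array([kabsch_rmsd(frame, reference) for frame in trajectory])

# Synthetic example: 1,000 frames of a 500-atom system, compared to the first frame.
traj = np.random.rand(1000, 500, 3)
print(rmsd_profile(traj, traj[0])[:5])
```

Even this simple profile requires one full pass over every frame; the correlation and comparative analyses mentioned above revisit frames in data-dependent order, which is what makes the access pattern hard at TB scale.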

  4. Building accurate protein structures
  • Homology models can be built from related protein structures, but even with proper alignment they usually have about 4 Å RMS error, too large to permit meaningful computational simulation or molecular dynamics. The goal is to reduce the errors in these initial models.
  • We multi-align the group of structural neighbors using sequence and structure information to find the most stable parts of the domain. The extracted stable core structure is 20-30% closer in RMSD to the target than any of the original templates.
  • More flexible parts of the target are modeled locally by choosing the most sequentially similar loops from a library of local segments found in all the homologues.
  • A genetic-algorithm conformation search strategy running on the Cray XT4 uses backbone and side-chain angles as parameters; each GA step is evaluated by minimization (a generic sketch follows below).
  • A computed template compared with the target improved from 3.4 Å initially to 2.3 Å.
  E. Uberbacher, Biological and Environmental Sciences Division
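The slide does not spell out the GA itself, so the following is only a generic sketch of a torsion-angle genetic algorithm, with a toy scoring function standing in for the per-step minimization; every function, parameter, and value here is invented for illustration.

```python
import random

def score(angles):
    """Toy stand-in for energy minimization: lower is better."""
    return sum((a - 60.0) ** 2 for a in angles)

def mutate(angles, sigma=10.0, rate=0.1):
    """Perturb a fraction of the torsion angles."""
    return [a + random.gauss(0, sigma) if random.random() < rate else a for a in angles]

def crossover(a, b):
    """Single-point crossover between two angle lists."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def ga_search(n_angles=20, pop_size=50, generations=100):
    pop = [[random.uniform(-180, 180) for _ in range(n_angles)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=score)

best = ga_search()
print("best score:", round(score(best), 2))
```

In the actual method each GA step is evaluated by minimization rather than a toy score, which is where the Cray XT4 cycles go.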

  5. Analysis of ultra-large structural ensembles
  • Ultra-scale docking: Bayesian potentials allow exploration of hundreds to thousands of protein complexes instead of one or two.
  • Docking from independently crystallized subunits: we demonstrated excellent results on complexes reconstructed from ~500 independently crystallized subunits.
  • Whole-genome predictions: the developed technology has been used to predict thousands of protein complexes at the genome-wide level in several organisms.
  • Best structure: the set with the largest common kernel always includes nearly the best native structure present in the ensemble of 10,000 folded structures.
  • Quality estimator: the size of the largest common kernel (obtained from the connectivity graph) provides an excellent estimate of how close the selected structure is to the native ones.
  A. Gorin, Computer Science and Mathematics Division
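One way to read the "largest common kernel" idea is as the set of structural features (here, residue-residue contacts) shared by a group of structures in the ensemble; the sketch below scores candidate groups by the size of their shared contact set and picks the largest. This reading, and all the data, are assumptions made for illustration, not the published algorithm.

```python
from itertools import combinations

def common_kernel(group):
    """Intersection of the contact sets of all structures in a group."""
    kernel = set(group[0])
    for contacts in group[1:]:
        kernel &= contacts
    return kernel

def best_group(structures, group_size=3):
    """Pick the group of structures whose common kernel is largest."""
    best, best_kernel = None, set()
    for group in combinations(structures, group_size):
        kernel = common_kernel(group)
        if len(kernel) > len(best_kernel):
            best, best_kernel = group, kernel
    return best, best_kernel

# Each structure is represented by its set of residue-residue contacts (toy data).
ensemble = [
    {(1, 5), (2, 8), (3, 9)},
    {(1, 5), (2, 8), (4, 7)},
    {(1, 5), (2, 8), (3, 9), (6, 10)},
    {(2, 8), (4, 7)},
]
group, kernel = best_group(ensemble)
print("kernel size:", len(kernel), "kernel:", sorted(kernel))
```

The exhaustive search over groups shown here would not scale to 10,000 structures; the point is only the scoring idea (a larger shared kernel indicates a more native-like selection).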

  6. Predictive proteome analysis using advanced computation and physical models
  Data sizes: 30,000 to 200,000 spectra from a single experiment, to be compared with as many as millions of theoretical spectra.
  Scientific challenge:
  • Computational tools identify only 10-20% of spectra from proteomics analysis.
  • Fragmentation patterns are highly sequence-specific.
  • Statistical characterization of noise is essential.
  Result: the identification rate increased by 20-40% (20% more identified peptides at a 5% false-positive rate, 40% more at a 1% false-positive rate).
  [Figure: four-residue peptide backbone, NH3-HCα(R)-CO-NH-HCα(R)-CO-NH-HCα(R)-CO-NH-HCα(R)-COOH, illustrating sequence-specific fragmentation.]
  W. R. Cannon, Computational Biology and Bioinformatics
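As a minimal illustration of spectrum-to-database comparison (not the physics-based fragmentation models or statistical noise characterization referenced above), the sketch below bins peaks and scores an observed spectrum against a tiny set of theoretical spectra by cosine similarity; the peptides, peaks, and bin width are all invented.

```python
import numpy as np

def binned(spectrum, max_mz=2000.0, bin_width=0.5):
    """Turn a list of (m/z, intensity) peaks into a unit-normalized intensity vector."""
    vec = np.zeros(int(max_mz / bin_width))
    for mz, intensity in spectrum:
        idx = int(mz / bin_width)
        if idx < len(vec):
            vec[idx] += intensity
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def best_match(observed, theoretical_db):
    """Score an observed spectrum against every theoretical spectrum (cosine similarity)."""
    obs = binned(observed)
    scores = {peptide: float(obs @ binned(spec)) for peptide, spec in theoretical_db.items()}
    return max(scores.items(), key=lambda kv: kv[1])

# Tiny illustrative database: peptide -> theoretical fragment peaks.
db = {
    "PEPTIDE": [(97.1, 1.0), (226.1, 0.8), (323.2, 0.6)],
    "SAMPLER": [(88.0, 1.0), (175.1, 0.9), (262.2, 0.5)],
}
observed = [(97.0, 0.9), (226.2, 0.7), (323.1, 0.5)]
print(best_match(observed, db))
```

At 30,000-200,000 observed spectra against millions of theoretical ones, the comparison count alone reaches 10^10 or more, which is the data-intensive part of the problem.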

  7. In-depth analysis of MS proteomics data flows
  Mass-spectrometry-based proteomics is one of the richest sources of biological information, but the data flows are enormous: hundreds of thousands of samples, each consisting of hundreds of thousands of spectra. Computationally, in-depth analysis of the spectra requires both advanced graph algorithms and memory-based indexes (> 1 TB).
  Two principally new capabilities were demonstrated:
  • The number of identified peptides increased by 50%.
  • Among highly reliable identifications, the increase is many-fold.
  A. Gorin, Computer Science and Mathematics Division
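A minimal sketch of what a memory-resident spectral index might look like: an inverted index from binned peak m/z to spectrum IDs, used to pull candidate spectra before any expensive scoring. The bin width, threshold, and data are assumptions for illustration, not the project's actual index or graph algorithms.

```python
from collections import defaultdict

BIN_WIDTH = 0.5  # m/z bin size; an assumption for this illustration

def peak_bin(mz):
    return int(mz / BIN_WIDTH)

def build_index(spectra):
    """Map each binned peak m/z to the IDs of the spectra that contain it."""
    index = defaultdict(set)
    for spec_id, peaks in spectra.items():
        for mz, _intensity in peaks:
            index[peak_bin(mz)].add(spec_id)
    return index

def candidates(index, query_peaks, min_shared=2):
    """Return spectrum IDs sharing at least `min_shared` peak bins with the query."""
    counts = defaultdict(int)
    for mz, _intensity in query_peaks:
        for spec_id in index.get(peak_bin(mz), ()):
            counts[spec_id] += 1
    return {s for s, c in counts.items() if c >= min_shared}

spectra = {
    "spec1": [(97.1, 1.0), (226.1, 0.8)],
    "spec2": [(88.0, 1.0), (175.1, 0.9)],
}
idx = build_index(spectra)
print(candidates(idx, [(97.0, 0.9), (226.2, 0.7)]))   # -> {'spec1'}
```

Kept entirely in RAM for hundreds of thousands of spectra per sample, an index of this general kind is one way such analyses end up needing memory on the > 1 TB scale.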

  8. ScalaBLAST
  Scientific challenge:
  • A standalone BLAST application would take more than 1 year for IMG (>1.6 million proteins) vs. IMG, and more than 3 years for IMG vs. nr.
  • The "Perl script" approach breaks even modest clusters because of poor memory management.
  ScalaBLAST shows good scaling on both shared- and distributed-memory architectures.
  [Figures: work factor (queries per processor-minute) vs. number of processors for ScalaBLAST against the ideal, out to 10,000 processors; run time (hours) vs. trillions of pairwise comparisons, and scaling factor vs. number of processors, for IMG 1.6 and IMG 2.2 on MPP2 and Altix, compared with HTCBLAST.]
  C. S. Oehmen, Computational Biology and Bioinformatics
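ScalaBLAST itself is an MPI application with careful management of the target database in memory; the sketch below only illustrates the underlying idea of statically partitioning the query set across workers so that throughput scales with processor count, using Python's multiprocessing and a stand-in for the per-query search.

```python
from multiprocessing import Pool

def chunk(queries, n_workers):
    """Statically partition query sequences across workers (round-robin)."""
    return [queries[i::n_workers] for i in range(n_workers)]

def search_one(query):
    # Stand-in for a real BLAST search of `query` against a shared target database.
    return (query, len(query))

def search_batch(batch):
    return [search_one(q) for q in batch]

if __name__ == "__main__":
    queries = ["MKTAYIAKQR", "GAVLILVLLA", "MSDNE", "QQWKRT"]
    with Pool(2) as pool:
        per_worker = pool.map(search_batch, chunk(queries, 2))
    # Flatten per-worker results back into a single list of hits.
    print([hit for batch in per_worker for hit in batch])
```

Because every query is independent, the work divides almost perfectly; the hard part in practice, and the slide's point about memory management, is keeping the large target database accessible to every worker without exhausting node memory.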

  9. pGraph: Parallel Graph Library for analysis of biological networks
  Major features:
  • Memory reduction by a factor of 1000
  • Scalability to hundreds of processors
  • FPT-based search-space reduction theory
  [Figure: find-cliques / merge-cliques pipeline producing module and association graphs; before: ~2 TB, after: ~1 GB; example graph with 2,109 vertices, 16,169 edges, 851 "common-target" associations, and 4,123 modules.]
  Nagiza Samatova, Computer Science and Mathematics Division
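pGraph's parallel implementation is not reproduced here; as a minimal serial illustration of the "find cliques" step, the sketch below enumerates maximal cliques with the Bron-Kerbosch algorithm on a toy association graph.

```python
def bron_kerbosch(R, P, X, adj, cliques):
    """Enumerate maximal cliques: R = current clique, P = candidates, X = already explored."""
    if not P and not X:
        cliques.append(set(R))
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, cliques)
        P.remove(v)
        X.add(v)

# Adjacency sets of a small toy association graph.
adj = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "d"},
    "d": {"b", "c"},
}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
print(cliques)   # maximal cliques: {'a', 'b', 'c'} and {'b', 'c', 'd'}
```

At the scale quoted on the slide, the interesting work is in the FPT-based kernelization that shrinks the search space before any such enumeration, and in distributing it over hundreds of processors, neither of which this serial sketch attempts.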

  10. Discovering cell networks with structure and genome context
  The structure of the ChrR anti-sigma factor from Rhodobacter sphaeroides reveals domains that help explain the combinatoric complexity of gene regulation in response to environmental conditions (2007 Sep 7; 27(5): 793). An advanced toolkit for graph, genome-context, and visual analytics was used to study ECF sigma factor regulation. The conserved ASD domain links ~30% of ECF sigma and anti-sigma pairs in the signal transduction cascade, e.g., the responses to iron and singlet oxygen (1O2) by the FecI-FecR and RpoE-ChrR protein pairs. Many combinations of different sigma (σ) and sensor (S) domains exist. Two levels of positional clustering, (1) domain and (2) gene neighbor, generate vast permutations between protein pairs.
  [Figure: ECF regulation diagram of sigma (σ1-σ4), anti-sigma (ASD), and sensor (S1-S4) domain combinations; 3D structure of the ChrR anti-sigma regulator; FecI, FecR, RpoE, ChrR.]
  Heidi Sofia, Computational Biology and Bioinformatics

  11. Stochastic simulations of biological networks
  Scientific challenge:
  • To model microbial communities, the smallest realistic model would include >10^9 cells x 100 reactions, i.e., ~10^11 reactions.
  • Current methods can handle only ~10^4-10^6 reactions on a single-processor machine.
  Probability-weighted dynamic Monte Carlo method (J. Phys. Chem. B 105, 11026, 2001):
  • Speedup over the exact Gillespie SSA: ~25
  Multinomial tau-leaping method:
  • Speedups: ~40-200
  [Figure: average running time (seconds, 0-1500) vs. number of phosphorylation binding sites (1-12) for the compared methods.]
  H. Resat, Computational Biology and Bioinformatics
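For reference, a minimal exact (direct-method) Gillespie SSA on a toy birth-death network is sketched below; the probability-weighted and multinomial tau-leaping methods cited above are accelerations of this baseline. The reaction network and rate constants are invented for the example.

```python
import math
import random

def gillespie_ssa(x, rates, stoich, propensity, t_end):
    """Exact direct-method SSA. x: species counts, stoich: state change per reaction."""
    t, trajectory = 0.0, [(0.0, list(x))]
    while t < t_end:
        a = [propensity(j, x, rates) for j in range(len(stoich))]
        a0 = sum(a)
        if a0 == 0:
            break
        t += -math.log(1.0 - random.random()) / a0     # exponential waiting time
        r, cum, j = random.random() * a0, 0.0, 0
        while cum + a[j] < r:                          # choose which reaction fires
            cum += a[j]
            j += 1
        x = [xi + d for xi, d in zip(x, stoich[j])]
        trajectory.append((t, list(x)))
    return trajectory

# Toy network: 0 -> A at rate k1; A -> 0 at rate k2 * [A].
stoich = [[+1], [-1]]
def propensity(j, x, rates):
    return rates[0] if j == 0 else rates[1] * x[0]

traj = gillespie_ssa(x=[0], rates=[5.0, 0.1], stoich=stoich, propensity=propensity, t_end=50.0)
print("final count of A:", traj[-1][1][0])
```

Every reaction firing is simulated individually here, which is exactly why the exact method cannot reach the ~10^11 reactions of a realistic microbial community and why the leaping approximations on the slide matter.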

  12. Contacts
  • Tjerk Straatsma, Principal Investigator, Pacific Northwest National Laboratory, tps@pnl.gov
  • Nagiza Samatova, Principal Investigator, Oak Ridge National Laboratory, samatovan@ornl.gov
  • Christine Chalk, Program Manager, U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research (SciDAC), christine.chalk@science.doe.gov
  Samatova_BioPilot_SC07
