
Parallel Hardware for Sequence Comparison and Alignment


Presentation Transcript


  1. Parallel Hardware for Sequence Comparison and Alignment By Richard Hughey University of California, Santa Cruz Presented by Travis Brown

  2. Introduction • Use dynamic programming • Compare shorter sequences first • Sequence comparison is O(n²) • An O(n²/log n) version exists, but it is not practical for parallelization • Use the standard local alignment algorithm to find the best score
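The local alignment algorithm referred to here is Smith–Waterman. A minimal score-only sketch (the match/mismatch/gap values are illustrative assumptions, not ones given in the talk):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local alignment score between strings a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    # DP matrix H, zero-initialized; row 0 and column 0 stay 0 so an
    # alignment can start anywhere (the "local" part of the algorithm).
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Clamp at 0: a negative-scoring prefix is dropped rather than extended.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

The two nested loops are the O(n²) cost the slide mentions; this serial form is the baseline that the coarse- and fine-grain approaches below parallelize.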

  3. Coarse-Grain Approach • Suitable for large database searches • Data is divided evenly among all PEs • Multiple independent analyses are performed • Not suitable for small databases because there is not enough data for all the PEs • Method of choice for non-specialized processors
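The coarse-grain scheme needs no communication between PEs: each one scans its share of the database independently and only the best scores are combined. A sketch of that partitioning (the `score` function here is a hypothetical placeholder; a real search would run full Smith–Waterman per sequence):

```python
from concurrent.futures import ThreadPoolExecutor

def score(query, seq):
    # Hypothetical stand-in scoring function: count matching positions.
    # A real coarse-grain search would run a full alignment here.
    return sum(1 for a, b in zip(query, seq) if a == b)

def coarse_grain_search(query, database, n_workers=4):
    # Divide the database evenly among workers ("PEs"); each worker
    # processes its chunk with no inter-worker communication.
    chunks = [database[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        per_chunk_best = pool.map(
            lambda chunk: max((score(query, s) for s in chunk), default=0),
            chunks)
    # Reduce: the overall best score across all chunks.
    return max(per_chunk_best)
```

This also makes the slide's caveat concrete: with fewer sequences than workers, some chunks are empty and those PEs sit idle.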

  4. Fine Grain Approach • O(n) processing elements are used to compare two sequences in O(n + N) time • Calculations along a single diagonal can all be performed at once • Assign one PE to each character of the query string • Shift the database through the linear array of PEs

  5. Fine Grain Approach (cont.) • Each ci,j depends only on ci-1,j-1, ci-1,j, and ci,j-1, so all cells along an anti-diagonal of the matrix can be computed simultaneously • Entire calculation can be completed in N time steps on n PEs • Method of choice for special-purpose processors
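The wavefront order can be shown in software: every cell with i + j = k depends only on diagonals k−1 and k−2, so all cells on one anti-diagonal could be updated in a single parallel step (one per PE). A serial sketch of that iteration order, reusing the illustrative scoring values from above:

```python
def wavefront_sw(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman score, iterating by anti-diagonal instead of by row."""
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for k in range(2, n + m + 1):              # anti-diagonal index i + j = k
        # All (i, j) on this diagonal are independent of each other:
        # on hardware, each PE would compute one of them in the same step.
        for i in range(max(1, k - m), min(n, k - 1) + 1):
            j = k - i
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

The inner loop is exactly the work that the linear PE array performs concurrently as the database shifts past the query.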

  6. Architectures • There are 5 major architectures that can be used for sequence analysis • Workstations • Supercomputers • Single-purpose VLSI • Reconfigurable Hardware • Programmable co-processors

  7. Workstation • Used together as a Network of Workstations • Best used for coarse-grain problems • Fairly inexpensive and can be used for many other tasks

  8. Supercomputer • Most flexible means of fast sequence analysis • Very costly • SIMD works well • MIMD performs only slightly better than the 5 Alphas

  9. Single Purpose VLSI • Highest performance for a single algorithm • Inexpensive (~$12,000) • This is the method of choice for • BioScan (812 PEs per chip) • Fast Data Finder (5 boards, 3360 PEs)

  10. Reconfigurable Hardware • Based on FPGAs (Field Programmable Gate Arrays) • Generally have a higher cost than Single Purpose VLSI machines

  11. Programmable co-processors • Cost of these systems is more than Single Purpose VLSI, but less than Reconfigurable Hardware • Have hardware dedicated to performing simple tasks (e.g., adding two numbers)

  12. Cost of Systems • Several of these systems have not been built, or not completed, so costs are estimates only • Commercial and research machines are expandable • Faster systems come at a higher cost

  13. Discussion • Some systems are faster under different conditions • This makes evaluating systems difficult • Different algorithms produce different results • The algorithms change over time, and so do the system requirements

  14. Conclusion • There is no “best” solution for any problem • Cost/Performance is important, but difficult to measure • Specialized, yet programmable hardware seems to be the best solution

  15. Biography • Richard Hughey • Associate Professor and Chair of Computer Engineering at the University of California, Santa Cruz • Email: rph@cse.ucsc.edu

  16. Questions/Comments

  17. Thank you
