Conditional Random Fields


Presentation Transcript


  1. Conditional Random Fields: a brief description

  2. HMMs and CRFs
[Figure: HMM state lattice with states 1…K in each column, over positions x1 x2 x3 … xN]
• Features used in the objective function P(x, π) for an HMM: a_kl, e_k(b), where b ∈ Σ
• V_l(i) = V_k(i − 1) + (a(k, l) + e(l, x_i)) = V_k(i − 1) + g(k, l, x_i)  (here a and e denote log-transition and log-emission scores)
• Let's generalize g: V_k(i − 1) + g(k, l, x, i)

  3. CRFs - Features
[Figure: a feature at positions i−1, i of the parse, with access to the whole sequence x1 … x10]
• Define a set of features that may be important
• Number the features 1…n: f_1(k, l, x, i), …, f_n(k, l, x, i)
• Features are indicator (true/false) variables
• Train a weight w_j for each feature f_j
• Then, g(k, l, x, i) = Σ_{j=1…n} f_j(k, l, x, i) × w_j
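To make the feature/weight machinery concrete, here is a minimal Python sketch of evaluating g(k, l, x, i) as a weighted sum of indicator features; the two example features and the weight values are illustrative assumptions, not taken from the slides.

```python
# Sketch: evaluating g(k, l, x, i) = sum_j f_j(k, l, x, i) * w_j
# The two example features below are illustrative, not from the lecture.

def f_transition_1_to_2(k, l, x, i):
    """Indicator: previous state k=1 and current state l=2."""
    return 1 if (k == 1 and l == 2) else 0

def f_gc_rich_window(k, l, x, i):
    """Indicator: current state l=2 and the 5-letter window around x[i] is GC-rich."""
    window = x[max(0, i - 2): i + 3]
    gc = sum(c in "GC" for c in window)
    return 1 if (l == 2 and gc >= 4) else 0

features = [f_transition_1_to_2, f_gc_rich_window]
weights  = [0.7, 1.3]   # w_j, learned by conditional training

def g(k, l, x, i):
    return sum(w * f(k, l, x, i) for f, w in zip(features, weights))

print(g(1, 2, "ACGCGCGTTA", 3))   # both features fire here -> 0.7 + 1.3
```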

  4. 1 2 3 4 5 6 … x1 x2 x3 x4 x5 x6 1 2 3 4 5 6 … x1 x2 x3 x4 x5 x6 “Features” that depend on many pos. in x • Score of a parse depends on all of x at each position • Can still do Viterbi because state i only “looks” at prev. state i-1 and the constant sequence x HMM CRF

  5. Conditional Training
• Hidden Markov Model training:
  • Given training sequence x, "true" parse π
  • Maximize P(x, π)
• Disadvantage: P(x, π) = P(π | x) P(x)
  • P(π | x): the quantity we care about, so as to get a good parse
  • P(x): a quantity we don't care so much about, because x is always given

  6. Conditional Training
P(x, π) = P(π | x) P(x), so P(π | x) = P(x, π) / P(x)
Recall F_j(x, π) = # of times feature f_j occurs in (x, π) = Σ_{i=1…N} f_j(π_{i−1}, π_i, x, i)  (the count of f_j in x, π)
In HMMs, let's denote by w_j the weight of the jth feature: w_j = log(a_kl) or log(e_k(b))
Then,
HMM: P(x, π) = exp[Σ_{j=1…n} w_j × F_j(x, π)]
CRF: Score(x, π) = exp[Σ_{j=1…n} w_j × F_j(x, π)]

  7. Conditional Training
In HMMs, P(π | x) = P(x, π) / P(x)
P(x, π) = exp[w^T F(x, π)]
P(x) = Σ_π exp[w^T F(x, π)] =: Z
Then, in a CRF we can do the same to normalize Score(x, π) into a probability:
P_CRF(π | x) = exp[Σ_{j=1…n} w_j F_j(x, π)] / Z
QUESTION: Why is this a probability?
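A tiny brute-force sketch (Python) makes the normalization explicit: enumerate all parses of a short sequence, compute Z, and check that the resulting P(π | x) values sum to 1. The feature set, weights, and sequence are toy assumptions.

```python
# Sketch: brute-force normalization for a tiny CRF, to see why
# P(pi | x) = exp(sum_j w_j F_j(x, pi)) / Z sums to 1 over all parses.
# Feature set and weights are illustrative toys, not from the lecture.
from itertools import product
from math import exp

x = "ACGT"
states = [1, 2]
weights = [0.5, -0.2]

def features(k, l, x, i):
    """f_1: same state as before; f_2: in state 2 while the letter is G or C."""
    return [1 if k == l else 0,
            1 if (l == 2 and x[i] in "GC") else 0]

def score(x, parse):
    s = 0.0
    for i in range(1, len(x)):
        s += sum(w * f for w, f in zip(weights, features(parse[i - 1], parse[i], x, i)))
    return exp(s)

Z = sum(score(x, p) for p in product(states, repeat=len(x)))
probs = {p: score(x, p) / Z for p in product(states, repeat=len(x))}
print(sum(probs.values()))   # -> 1.0 (up to floating point), so it is a probability
```

In practice Z is not enumerated this way; it is computed with the sum-of-paths algorithm mentioned on the next slide.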

  8. Conditional Training
• We need to be given a set of sequences x and "true" parses π
• Calculate Z by a sum-of-paths algorithm, similar to the HMM forward algorithm
• We can then easily calculate P(π | x)
• Calculate the partial derivative of P(π | x) w.r.t. each parameter w_j (not covered; akin to forward/backward)
• Update each parameter with gradient descent
• Continue until convergence to the optimal set of weights
P(π | x) = exp[Σ_{j=1…n} w_j F_j(x, π)] / Z; the negative log-likelihood is convex in w, so gradient descent converges to the global optimum
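Here is a hedged sketch of a single gradient step on log P(π | x), using the standard identity that the gradient is the observed feature count minus the expected feature count under the model. The expectation is brute-forced over all parses of a toy example (real implementations use a forward/backward-style algorithm), and the features, learning rate, and data are placeholders.

```python
# Sketch: one gradient-ascent step on log P(pi_true | x) for a CRF.
# d/dw_j log P(pi | x) = F_j(x, pi_true) - E_{pi ~ P(.|x)}[F_j(x, pi)]
# The expectation is brute-forced over all parses of a tiny example;
# real implementations use a forward/backward-style algorithm.
from itertools import product
from math import exp

x = "ACGT"
pi_true = (1, 1, 2, 2)
states = [1, 2]
w = [0.0, 0.0]
eta = 0.1                                   # learning rate (placeholder)

def feats(k, l, x, i):
    return [1 if k == l else 0,
            1 if (l == 2 and x[i] in "GC") else 0]

def counts(x, parse):                       # F_j(x, parse)
    F = [0, 0]
    for i in range(1, len(x)):
        for j, f in enumerate(feats(parse[i - 1], parse[i], x, i)):
            F[j] += f
    return F

def unnorm(x, parse):
    return exp(sum(wj * Fj for wj, Fj in zip(w, counts(x, parse))))

parses = list(product(states, repeat=len(x)))
Z = sum(unnorm(x, p) for p in parses)
expected = [sum(unnorm(x, p) / Z * counts(x, p)[j] for p in parses) for j in range(2)]

w = [wj + eta * (Fj - Ej) for wj, Fj, Ej in zip(w, counts(x, pi_true), expected)]
print(w)                                    # weights after one gradient step
```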

  9. Conditional Random Fields: Summary
• Ability to incorporate complicated, non-local feature sets
• Do away with some of the independence assumptions of HMMs
• Parsing is still equally efficient
• Conditional training
  • Train parameters that are best for parsing, not for modeling
  • Need labeled examples: sequences x and "true" parses π (training on unlabeled sequences is possible, but it is unreasonable to train many parameters that way)
  • Training is significantly slower: many iterations of forward/backward

  10. DNA Sequencing

  11. Some Terminology
• insert: a fragment that was incorporated in a circular genome, and can be copied (cloned)
• vector: the circular genome (host) that incorporated the fragment
• BAC: Bacterial Artificial Chromosome, a type of insert–vector combination, typically of length 100–200 kb
• read: a 500–900 letter long word that comes out of a sequencing machine
• coverage: the average number of reads (or inserts) that cover a position in the target DNA piece
• shotgun sequencing: the process of obtaining many reads from random locations in DNA, to detect overlaps and assemble

  12. Sequencing and Fragment Assembly
[Figure: reads assembled from a genome containing repeats]
• 3×10^9 nucleotides
• 50% of human DNA is composed of repeats
• Error! Glued together two distant regions

  13. What can we do about repeats? Two main approaches: • Cluster the reads • Link the reads


  16. Sequencing and Fragment Assembly
[Figure: a 3×10^9-nucleotide genome with two copies of a repeat R, one flanked by regions A and B, the other by C and D]
• Ambiguity caused by the repeat: is the correct reconstruction A-R-B, C-R-D or A-R-D, C-R-B?

  17. Sequencing and Fragment Assembly
[Figure: shotgun reads sampled across the 3×10^9-nucleotide genome]

  18. Strategies for whole-genome sequencing
1. Hierarchical (clone-by-clone)
  • Break genome into many long pieces
  • Map each long piece onto the genome
  • Sequence each piece with shotgun
  • Examples: Yeast, Worm, Human, Rat
2. Online version of (1): Walking
  • Break genome into many long pieces
  • Start sequencing each piece with shotgun
  • Construct the map as you go
  • Example: Rice genome
3. Whole-genome shotgun
  • One large shotgun pass on the whole genome
  • Examples: Drosophila, Human (Celera), Neurospora, Mouse, Rat, Dog

  19. Whole Genome Shotgun Sequencing
[Figure: the genome is cut many times at random into plasmids (2–10 kbp) and cosmids (40 kbp); forward and reverse paired reads of ~500 bp are taken from the two ends of each insert, a known distance apart]

  20. Fragment Assembly (in whole-genome shotgun sequencing)

  21. Fragment Assembly Given N reads… Where N ~ 30 million… We need to use a linear-time algorithm

  22. Steps to Assemble a Genome
Some Terminology
• read: a 500–900 letter long word that comes out of the sequencer
• mate pair: a pair of reads from the two ends of the same insert fragment
• contig: a contiguous sequence formed by several overlapping reads with no gaps
• supercontig: an ordered and oriented set (scaffold) of contigs, usually linked by mate pairs
• consensus: sequence derived from the multiple alignment of reads in a contig
1. Find overlapping reads
2. Merge some "good" pairs of reads into longer contigs
3. Link contigs to form supercontigs
4. Derive consensus sequence  ..ACGATTACAATAGGTT..

  23. 1. Find Overlapping Reads
[Figure: each read is decomposed into its k-mers as (read, pos., word, orient.) tuples, which are then sorted by word into a (word, read, orient., pos.) table; reads that share a k-mer, e.g. aaactgcagtacggatct and gggcccaaactgcagtac sharing actgcagta, end up adjacent in the table]
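A minimal Python sketch of the k-mer indexing idea on this slide: decompose each read into k-mers, bucket the k-mers into a word-keyed table, and report read pairs that share a word. The read set and the small k are toy values; orientation and reverse complements are ignored here.

```python
# Sketch: find candidate overlapping read pairs via a shared-k-mer index.
# Toy reads and a small k; real assemblers use k ~ 24 and also handle
# orientation / reverse complements, which is omitted here.
from collections import defaultdict
from itertools import combinations

reads = {
    "r1": "aaactgcagtacggatct",
    "r2": "gggcccaaactgcagtac",
    "r3": "gtacggatctactacaca",
}
k = 9

index = defaultdict(list)                 # word -> [(read id, position)]
for rid, seq in reads.items():
    for pos in range(len(seq) - k + 1):
        index[seq[pos:pos + k]].append((rid, pos))

candidates = set()
for word, hits in index.items():
    for (a, _), (b, _) in combinations(hits, 2):
        if a != b:
            candidates.add(tuple(sorted((a, b))))

print(sorted(candidates))                 # pairs to pass on to full alignment
```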

  24. 1. Find Overlapping Reads
• Find pairs of reads sharing a k-mer, k ~ 24
• Extend to a full alignment; throw away if not >98% similar
  TAGATTACACAGATTAC
  |||||||||||||||||
  TAGATTACACAGATTAC
  [Figure: a low-identity alignment, which would be discarded]
• Caveat: repeats
  • A k-mer that occurs N times causes O(N^2) read/read comparisons
  • ALU k-mers could cause up to 1,000,000^2 comparisons
• Solution:
  • Discard all k-mers that occur "too often"
  • Set the cutoff to balance the sensitivity/speed tradeoff, according to the genome at hand and the computing resources available

  25. 1. Find Overlapping Reads
Create local multiple alignments from the overlapping reads:
  TAGATTACACAGATTACTGA
  TAGATTACACAGATTACTGA
  TAG TTACACAGATTATTGA
  TAGATTACACAGATTACTGA
  TAGATTACACAGATTACTGA
  TAGATTACACAGATTACTGA
  TAG TTACACAGATTATTGA
  TAGATTACACAGATTACTGA

  26. 1. Find Overlapping Reads
• Correct errors using the multiple alignment
[Figure: stacks of aligned reads; an isolated missing base is fixed by inserting an A, an isolated T is replaced with C, while correlated differences (probably caused by repeats) are instead used to disentangle the overlaps]
• In practice, error correction removes up to 98% of the errors

  27. 2. Merge Reads into Contigs
• Overlap graph:
  • Nodes: reads r_1, …, r_n
  • Edges: overlaps (r_i, r_j, shift, orientation, score)
[Figure: an overlap graph in which reads from two regions of the genome (blue and red) that contain the same repeat become entangled]
Note: of course, we don't know the "color" of these nodes

  28. 2. Merge Reads into Contigs
• We want to merge reads up to potential repeat boundaries
[Figure: reads over a repeat region; merging only within unique sequence yields a unique contig, merging across the repeat yields an overcollapsed contig]

  29. 2. Merge Reads into Contigs
• Ignore non-maximal reads (reads entirely contained in another read)
• Merge only maximal reads into contigs
[Figure: reads over a repeat region]

  30. 2. Merge Reads into Contigs
• Remove transitively inferable overlaps
• If read r overlaps reads r_1 and r_2 to the right, and r_1 overlaps r_2, then the overlap (r, r_2) can be inferred from (r, r_1) and (r_1, r_2)
[Figure: reads r, r_1, r_2, r_3 in a chain of overlaps]
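A short Python sketch of removing transitively inferable overlaps on a toy overlap graph; the adjacency-list representation (right-overlaps per read, ordered by increasing shift) is an assumption made for illustration.

```python
# Sketch: remove transitively inferable overlap edges.
# right[r] lists reads that r overlaps to the right, ordered by increasing shift.
# If r -> r1 and r1 -> r2 both exist, then r -> r2 is inferable and can be dropped.

right = {
    "r":  ["r1", "r2", "r3"],
    "r1": ["r2", "r3"],
    "r2": ["r3"],
    "r3": [],
}

reduced = {}
for r, targets in right.items():
    inferable = set()
    for t in targets:
        for t2 in right[t]:
            if t2 in targets:
                inferable.add(t2)           # r -> t2 is implied by r -> t -> t2
    reduced[r] = [t for t in targets if t not in inferable]

print(reduced)   # {'r': ['r1'], 'r1': ['r2'], 'r2': ['r3'], 'r3': []}
```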

  31. 2. Merge Reads into Contigs

  32. 2. Merge Reads into Contigs repeat boundary??? sequencing error • Ignore “hanging” reads, when detecting repeat boundaries b a … b a

  33. Overlap graph after forming contigs Unitigs: Gene Myers, 95

  34. Repeats, errors, and contig lengths
• Repeats shorter than the read length are easily resolved
  • A read that spans across a repeat disambiguates the order of the flanking regions
• Repeats with more base-pair differences than the sequencing error rate are OK
  • We throw away overlaps between two reads lying in different copies of the repeat
• To make the genome appear less repetitive, try to:
  • Increase the read length
  • Decrease the sequencing error rate
• Role of error correction: discards up to 98% of single-letter sequencing errors
  decreases error rate → decreases effective repeat content → increases contig length

  35. 2. Merge Reads into Contigs • Insert non-maximal reads whenever unambiguous

  36. 3. Link Contigs into Supercontigs
[Figure: mate-pair links between contigs; normal link density vs. links that are too dense (overcollapsed contig) vs. inconsistent links (possibly overcollapsed)]

  37. 3. Link Contigs into Supercontigs
• Find all links between unique contigs
• Connect contigs incrementally if ≥ 2 links support the join
[Figure: a supercontig (a.k.a. scaffold) built from linked contigs]
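A small Python sketch of the incremental linking rule, grouping contigs into supercontigs with union-find whenever a contig pair is supported by at least 2 links. The link counts are toy data, and ordering/orienting the contigs within a scaffold is omitted.

```python
# Sketch: group unique contigs into supercontigs when >= 2 mate-pair links
# support a join. Link counts below are toy data; real scaffolders also
# order and orient contigs using the link distances, which is omitted here.
from collections import Counter

links = Counter({("c1", "c2"): 3, ("c2", "c3"): 1, ("c3", "c4"): 2})

parent = {}
def find(c):
    parent.setdefault(c, c)
    while parent[c] != c:
        parent[c] = parent[parent[c]]       # path halving
        c = parent[c]
    return c

def union(a, b):
    parent[find(a)] = find(b)

for (a, b), n in links.items():
    if n >= 2:                              # require >= 2 supporting links
        union(a, b)

groups = {}
for c in {c for pair in links for c in pair}:
    groups.setdefault(find(c), []).append(c)
print(list(groups.values()))                # e.g. [['c1', 'c2'], ['c3', 'c4']]
```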

  38. 3. Link Contigs into Supercontigs Fill gaps in supercontigs with paths of repeat contigs

  39. 4. Derive Consensus Sequence
• Derive a multiple alignment from the pairwise read alignments:
  TAGATTACACAGATTACTGACTTGATGGCGTAAACTA
  TAG TTACACAGATTATTGACTTCATGGCGTAA CTA
  TAGATTACACAGATTACTGACTTGATGGCGTAA CTA
  TAGATTACACAGATTACTGACTTGATGGGGTAA CTA
  TAGATTACACAGATTACTGACTTGATGGCGTAA CTA
  (consensus row shown in the figure: TAGATTACACAGATTACTGA TTGATGGCGTAA CTA)
• Derive each consensus base by weighted voting
  (Alternative: take the maximum-quality letter)
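A minimal Python sketch of per-column consensus by weighted voting over aligned reads; the per-read weights stand in for base qualities and are invented for illustration (uniform weights reduce to simple majority voting).

```python
# Sketch: consensus by weighted voting over the columns of a read alignment.
# Each read gets a weight (e.g. derived from base qualities); the weights here
# are illustrative. A space means the read has no base in that column.
from collections import defaultdict

aligned_reads = [
    ("TAGATTACACAGATTACTGA", 1.0),
    ("TAG TTACACAGATTATTGA", 0.6),
    ("TAGATTACACAGATTACTGA", 0.9),
]

length = max(len(r) for r, _ in aligned_reads)
consensus = []
for col in range(length):
    votes = defaultdict(float)
    for read, weight in aligned_reads:
        if col < len(read) and read[col] != " ":
            votes[read[col]] += weight
    consensus.append(max(votes, key=votes.get))

print("".join(consensus))   # -> TAGATTACACAGATTACTGA
```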

  40. Some Assemblers
• PHRAP
  • Early assembler, widely used, good model of read errors
  • Overlap O(n^2) → layout (no mate pairs) → consensus
• Celera
  • First assembler to handle large genomes (fly, human, mouse)
  • Overlap → layout → consensus
• Arachne
  • Public assembler (mouse, several fungi)
  • Overlap → layout → consensus
• Phusion
  • Overlap → clustering → PHRAP → assemblage → consensus
• Euler
  • Indexing → Euler graph → layout by picking paths → consensus

  41. Quality of assemblies: mouse
Terminology: N50 contig length. If we sort contigs from largest to smallest and start covering the genome in that order, N50 is the length of the contig that just covers the 50th percentile.
[Figure: assembly statistics for the mouse genome, 7.7X sequence coverage]
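Following the definition above, here is a short Python sketch of computing N50 from a list of contig lengths; the lengths are toy values.

```python
# Sketch: N50 from contig lengths. Sort contigs from largest to smallest and
# walk down the list; N50 is the length of the contig at which the running
# total first reaches half of the total assembled length.
def n50(contig_lengths):
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length

print(n50([100, 80, 60, 40, 20, 10]))   # total 310, half 155 -> returns 80
```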

  42. Quality of assemblies: dog
[Figure: assembly statistics for the dog genome, 7.5X sequence coverage]

  43. History of WGA
• 1982: λ-virus, 48,502 bp
• 1995: H. influenzae, 1 Mbp
• 2000: fly, 100 Mbp
• 2001 – present: human (3 Gbp), mouse (2.5 Gbp), rat*, chicken, dog, chimpanzee, several fungal genomes
[Figure, the 1997 debate: Gene Myers: "Let's sequence the human genome with the shotgun strategy." Phil Green: "That is impossible, and a bad idea anyway."]

  44. Genomes Sequenced • http://www.genome.gov/10002154
