
Lecture 15: Linkage Analysis VII


Presentation Transcript


  1. Lecture 15: Linkage Analysis VII Date: 10/14/02 Correction: power calculation. Lander-Green Algorithm. (Titles of updated or added slides are highlighted.)

  2. Sample Size Calculation • What is the sample size needed in order to achieve a particular statistical power for an estimate? • We shall assume the relevant test statistic follows a (non-central) chi-square distribution.

  3. Sample Size Calculation (cont.) • γ is the statistical power. • χ²(df, α) is the critical value needed to reject H0 at significance level α. • c is the non-centrality parameter, usually the expectation of the log-likelihood-ratio test statistic under a particular HA and the given experimental conditions. • df is the degrees of freedom. • These quantities are related by γ = P(χ²(df; c) ≥ χ²(df, α)), which can be inverted to find the sample size delivering the target power.

  4. Sample Size Calculation (cont.)
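The power relation above can be sketched numerically. A minimal Monte Carlo sketch, assuming df = 1 and the α = 0.05 critical value 3.841; the function names (`power_mc`, `sample_size`) and the per-family non-centrality value are illustrative assumptions, not from the lecture:

```python
import math
import random

def power_mc(nc, alpha_crit=3.841, trials=100_000, seed=0):
    """Monte Carlo estimate of the power of a df=1 chi-square test with
    non-centrality parameter `nc`; 3.841 is the 5% critical value.
    A non-central chi-square draw with df=1 is (Z + sqrt(nc))^2, Z ~ N(0,1)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        z = rng.gauss(0.0, 1.0)
        if (z + math.sqrt(nc)) ** 2 > alpha_crit:
            hits += 1
    return hits / trials

def sample_size(per_family_nc, target_power=0.8):
    """Smallest number of families n such that the total non-centrality
    n * per_family_nc achieves at least the target power; the total
    non-centrality scales linearly with the number of families."""
    n = 1
    while power_mc(n * per_family_nc) < target_power:
        n += 1
    return n
```

With nc = 0 the estimate should sit near the nominal α = 0.05, and power grows with the non-centrality parameter, so the loop always terminates.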

  5. Modeling • Test your modeling skills: propose a model for the following family ascertainment situation. • Suppose probands are detected independently and with the same probability in each family, except that all secondary probands (second, third, etc., all to the same degree) are more easily detected than the first proband in a family. • The model formulation and calculation of the p_r probabilities for families with 3 affected members are now posted to the website.

  6. Lander-Green Algorithm • Like the Elston-Stewart algorithm, the Lander-Green algorithm models the pedigree and data as a Hidden Markov Model (HMM), except that the hidden states are the so-called inheritance vectors. • Like the Elston-Stewart algorithm, the Lander-Green algorithm assumes that there is no interference.

  7. LG – (Dis)Advantages • The Lander-Green algorithm is linear in the number of loci and exponential in the number of members in the pedigree. • Recall that the Elston-Stewart algorithm is complementary, linear in the number of members, but exponential in the number of loci. • Simulation methods (MCMC in particular) are used to deal with pedigrees with both high numbers of members and loci.

  8. LG – Inheritance Vector • The inheritance vector is a vector defined for each locus i in the dataset. • It is a binary vector with two components for each non-founder individual in the pedigree; thus it is of length 2(n − f), where n is the number of pedigree members and f is the number of founders. • An entry in the inheritance vector is 0 if the individual’s allele at that position is grandmaternal and 1 if it is grandpaternal. There are 2^(2(n − f)) possible inheritance vectors for each locus.
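The definition above is easy to enumerate in code. A minimal sketch (the function name is mine) for a pedigree with a given number of non-founders:

```python
from itertools import product

def inheritance_vectors(n_nonfounders):
    """Enumerate all 2^(2m) inheritance vectors for m non-founders.
    Each non-founder contributes two bits (one per meiosis):
    0 = grandmaternal allele transmitted, 1 = grandpaternal."""
    return list(product((0, 1), repeat=2 * n_nonfounders))

# pedigree with 2 non-founders: vectors of length 4, 2^4 = 16 of them
vecs = inheritance_vectors(2)
```

The exponential growth in the number of non-founders is exactly the cost discussed on slide 7.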

  9. LG – Inheritance Vector (cont) • The inheritance vector holds information about the number of crossovers that occurred to produce each non-founder in the pedigree. • Thus, it is appropriate for estimating recombination fractions, which is our goal here with the LG algorithm.

  10. LG – Inheritance Vector Example • [Pedigree figure: nine individuals; founders 1 (AA) and 2 (aa); the remaining members carry genotypes aa, aa, aA, aA, Aa, aa, and Aa. The pedigree structure itself did not survive the transcript.]

  11. LG – Simplification by Conditioning • Fortunately, conditional on the inheritance vectors, the genotypes of each offspring are independent. • Of course, conditional on the genotype, the phenotype probabilities are independent. • Thus, we can calculate the probability for each individual in the pedigree independently of the others once we condition on the inheritance vectors.
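A sketch of this factorization in symbols (x_j for individual j's observed phenotype and g for a genotype; the notation is assumed, not taken from the slides):

```latex
P(x_1,\dots,x_n \mid v) \;=\; \prod_{j=1}^{n} P(x_j \mid v)
\;=\; \prod_{j=1}^{n} \sum_{g} P(x_j \mid g)\, P(g \mid v)
```

so once the inheritance vector v is fixed, the likelihood at one locus reduces to a product of per-individual terms.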

  12. LG – Hidden States • The inheritance vector constitutes the unknown hidden state at each locus. We must define transition probabilities among the hidden states (from locus to locus). • Begin by considering the transition probability between loci within a single non-founder individual, where the inheritance vector is of length 2. • In that case, the hidden state at each locus is a binary vector of length 2.

  13. LG – Initial State • We must define the initial state at the first marker locus. • Prior to viewing the genotypes, all inheritance vectors are equally likely. • Assume the initial state of the inheritance vector at marker 1 is uniform over {(0,0), (1,0), (0,1), (1,1)}, listing the maternal-origin bit first. In other words, at marker 1 each of these states has probability ¼.

  14. LG – Pairwise Transition Probabilities • Because of the assumption of no interference, the transition probabilities from the state at locus i to the state at locus i+1 are given by P(v → w) = θ_i^d(v,w) (1 − θ_i)^(2 − d(v,w)), where θ_i is the recombination fraction between locus i and locus i+1 and d(v,w) is the number of positions at which the two length-2 states differ.

  15. LG – Switch in Notation • From this point on, assume there are n non-founders (rather than writing n − f). • The reason for this change is to simplify the equations.

  16. LG – Inheritance Vector Transition Probabilities • The transition probabilities between inheritance vectors defined on full pedigrees with n relevant members are given by T_i(v, w) = θ_i^d(v,w) (1 − θ_i)^(2n − d(v,w)), where d(v,w) is the Hamming distance between inheritance vectors v and w, i.e. the number of discordances between them.
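The transition formula translates directly into code. A sketch (function names are mine), using θ = 0.1 and a two-non-founder pedigree as the example:

```python
from itertools import product

def hamming(v, w):
    """Number of positions where inheritance vectors v and w differ."""
    return sum(a != b for a, b in zip(v, w))

def transition_prob(v, w, theta):
    """P(v at locus i -> w at locus i+1) under no interference:
    theta^d * (1 - theta)^(L - d), where L = len(v) = 2n meiosis bits
    and d = d(v, w) is the Hamming distance."""
    d = hamming(v, w)
    return theta ** d * (1.0 - theta) ** (len(v) - d)

# sanity check: transition probabilities out of any state sum to 1
states = list(product((0, 1), repeat=4))  # 2 non-founders -> 4 bits
row_sum = sum(transition_prob(states[0], w, 0.1) for w in states)
```

Each meiosis bit flips independently with probability θ, which is why each row of the transition matrix factorizes and sums to 1.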

  17. LG – Forward Variable • α_i(v) = P(O_1, …, O_i, inheritance vector v at locus i), computed recursively left-to-right along the markers.

  18. LG – Backward Variable • β_i(v) = P(O_{i+1}, …, O_L | inheritance vector v at locus i), computed recursively right-to-left.

  19. LG – ξ_i(v,w) • ξ_i(v,w) combines the forward variable, the transition probability, the penetrance (emission) parameter, and the backward variable for the pair of inheritance vectors v at locus i and w at locus i+1.
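The forward, backward, and ξ quantities follow the standard HMM recursions. A generic sketch, where the function names and data layout are my own and the tiny two-state example stands in for real inheritance vectors:

```python
def forward(init, trans, emits):
    """Forward variables: alpha[t][v] = P(O_1..O_t, state v at locus t).
    init: dict state -> prior prob; trans(t, v, w): transition prob from
    locus t to t+1; emits[t][v]: P(observations at locus t | state v)."""
    states = list(init)
    alpha = [{v: init[v] * emits[0][v] for v in states}]
    for t in range(1, len(emits)):
        prev = alpha[-1]
        alpha.append({w: sum(prev[v] * trans(t - 1, v, w) for v in states)
                      * emits[t][w] for w in states})
    return alpha

def backward(trans, emits, states):
    """Backward variables: beta[t][v] = P(O_{t+1}..O_L | state v at locus t)."""
    beta = [{v: 1.0 for v in states}]
    for t in range(len(emits) - 2, -1, -1):
        nxt = beta[0]
        beta.insert(0, {v: sum(trans(t, v, w) * emits[t + 1][w] * nxt[w]
                               for w in states) for v in states})
    return beta

def xi(alpha, beta, trans, emits, t):
    """xi_t(v, w): posterior probability of states v at t and w at t+1,
    built from forward variable * transition * emission * backward variable."""
    states = list(alpha[t])
    num = {(v, w): alpha[t][v] * trans(t, v, w) * emits[t + 1][w] * beta[t + 1][w]
           for v in states for w in states}
    total = sum(num.values())
    return {pair: p / total for pair, p in num.items()}
```

Summing the last forward row gives P(O), and each ξ table sums to 1, which provides a quick consistency check.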

  20. LG – Baum’s Lemma • Baum’s Lemma: Let Q(θ, θ′) = Σ_v P(O, v | θ) log P(O, v | θ′). If Q(θ, θ′) ≥ Q(θ, θ), then P(O | θ′) ≥ P(O | θ).

  21. LG – Proof of Baum’s Lemma

  22. LG – Jensen’s Inequality
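The argument on these two slides can be written out in the usual HMM notation (the symbols below are the standard ones and may not match the original slides exactly):

```latex
\log\frac{P(O\mid\theta')}{P(O\mid\theta)}
  = \log \sum_{v} \frac{P(O,v\mid\theta)}{P(O\mid\theta)}
         \cdot \frac{P(O,v\mid\theta')}{P(O,v\mid\theta)}
  \;\ge\; \sum_{v} \frac{P(O,v\mid\theta)}{P(O\mid\theta)}
         \log \frac{P(O,v\mid\theta')}{P(O,v\mid\theta)}
  = \frac{Q(\theta,\theta') - Q(\theta,\theta)}{P(O\mid\theta)} \;\ge\; 0
```

The middle step is Jensen's inequality applied to the concave logarithm (the weights P(O, v | θ)/P(O | θ) sum to 1), and the final inequality is the hypothesis of the lemma. Hence any θ′ that increases Q cannot decrease the likelihood.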

  23. LG – EM Algorithm • We maximize Q(θ, θ′) over θ′ in order to increase the likelihood P(O | θ) from the current parameter estimates θ. • This may sound familiar: it is the M step of the EM algorithm, and the EM algorithm is how we maximize the likelihood over θ for a pedigree. • Details are shown below. Maximization is the difficult step, so we show it first.

  24. LG - Maximization Key step: by conditional independence, this probability becomes a product of conditional probabilities.

  25. LG - Maximization

  26. LG – EM Algorithm (M Step)

  27. LG – EM Algorithm (E Step) • The E step computes the usual conditional probabilities needed to calculate the expectation, summing over all pairs of inheritance vectors.
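Under the transition model θ^d (1 − θ)^(2n − d), the M-step update for an interval's recombination fraction reduces to the expected number of recombinant meioses divided by the total number of meioses. A sketch (function name is mine; ξ is supplied as a dict of inheritance-vector pairs):

```python
def em_update_theta(xi_t, n_nonfounders):
    """One M-step update for the recombination fraction in interval i:
    theta' = (expected number of recombination events) / (2n meioses),
    where the expectation sums d(v, w) weighted by the posterior
    xi_t[(v, w)] over all pairs of inheritance vectors."""
    expected_recs = sum(p * sum(a != b for a, b in zip(v, w))
                        for (v, w), p in xi_t.items())
    return expected_recs / (2 * n_nonfounders)
```

Iterating the E step (computing ξ from the forward-backward pass) and this update converges to a local maximum of the likelihood, as Baum's lemma guarantees.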

  28. Heterogeneity in Recombination Fraction • Allow for two recombination fraction parameters in each interval. • Allow for one recombination fraction in each interval and a universal constant relating male and female recombination fractions. • Use nested models to test for evidence of sex-based differences.
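The nested-model comparison in the last bullet can be sketched as a likelihood-ratio test; the log-likelihood values below are placeholders, and 3.841 is the χ² critical value with 1 df at α = 0.05:

```python
def lrt_sex_specific(loglik_joint, loglik_sex_specific, crit=3.841):
    """Nested-model test for sex-specific recombination fractions.
    loglik_joint: max log-likelihood with theta_male = theta_female;
    loglik_sex_specific: max log-likelihood with separate thetas.
    Twice the log-likelihood difference is compared to the chi-square
    critical value with 1 df (one extra parameter)."""
    stat = 2.0 * (loglik_sex_specific - loglik_joint)
    return stat, stat > crit

# hypothetical log-likelihoods, for illustration only
stat, reject = lrt_sex_specific(-120.4, -117.1)
```

A significant statistic is evidence for sex-based differences in the recombination fraction for that interval.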

  29. Model Misspecification • Penetrance parameters and allele frequencies may be incorrectly specified. • The method is robust to such misspecification: the false positive rate for linkage is unaffected by misspecification of these parameters.

  30. Model Misspecification and Ascertainment • When ascertainment is independent of disease state and marker loci, the method remains robust to misspecification of both. • When ascertainment is made with respect to disease state, the method is robust to misspecification of the disease parameters.

  31. Effects on Power • Power in two-point linkage analysis is largely unaffected as long as the degree of dominance is specified correctly. • Multipoint linkage analysis is much more sensitive to model misspecification; however, there is more information when model parameters are estimated jointly with position.
