

  1. QMDA Review Session

  2. Things you should remember

  3. 1. Probability & Statistics

  4. The Gaussian or normal distribution: $p(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\{ -(x-\bar{x})^2 / 2\sigma^2 \}$, where $\bar{x}$ is the expected value and $\sigma^2$ is the variance.

  5. Properties of the normal distribution: Expectation = Median = Mode = $\bar{x}$, and 95% of the probability lies within $2\sigma$ of the expected value. [Figure: $p(x)$ versus $x$, with 95% of the area between $\bar{x}-2\sigma$ and $\bar{x}+2\sigma$.]

  6. Multivariate distributions: the covariance matrix, $C$, is very important. Its diagonal elements give the variance of each $x_i$: $\sigma_{x_i}^2 = C_{ii}$.

  7. The off-diagonal elements of $C$ indicate whether pairs of $x$'s are correlated, e.g. $C_{12}$: $C_{12} > 0$ means positive correlation, $C_{12} < 0$ means negative correlation. [Figure: scatter plots in the $(x_1, x_2)$ plane illustrating positive and negative correlation.]

  8. The multivariate normal distribution $p(\mathbf{x}) = (2\pi)^{-N/2}\,|C_x|^{-1/2} \exp\{ -\tfrac{1}{2}(\mathbf{x}-\bar{\mathbf{x}})^T C_x^{-1} (\mathbf{x}-\bar{\mathbf{x}}) \}$ has expectation $\bar{\mathbf{x}}$ and covariance $C_x$, and is normalized to unit area.

  9. If $\mathbf{y}$ is linearly related to $\mathbf{x}$, $\mathbf{y} = M\mathbf{x}$, then $\bar{\mathbf{y}} = M\bar{\mathbf{x}}$ (rule for means) and $C_y = M C_x M^T$ (rule for propagating error). These rules work regardless of the distribution of $\mathbf{x}$.
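
A minimal numpy sketch of these two rules; the mean, covariance, and matrix $M$ below are made up for illustration, and the result is checked by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mean and covariance of x, and a linear map M
xbar = np.array([1.0, 2.0])
Cx = np.array([[2.0, 0.5],
               [0.5, 1.0]])
M = np.array([[1.0, 1.0],
              [0.0, 3.0]])

# The two rules: ybar = M xbar and Cy = M Cx M^T
ybar = M @ xbar
Cy = M @ Cx @ M.T

# Monte Carlo check: draw many x's and map them through M
x = rng.multivariate_normal(xbar, Cx, size=100_000)
y = x @ M.T
print(ybar, y.mean(axis=0))       # should agree closely
print(Cy)
print(np.cov(y, rowvar=False))    # should agree closely
```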

  10. 2. Least Squares

  11. Simple least squares: linear relationship between data, $\mathbf{d}$, and model, $\mathbf{m}$: $\mathbf{d} = G\mathbf{m}$. Minimize the prediction error $E = \mathbf{e}^T\mathbf{e}$ with $\mathbf{e} = \mathbf{d}^{obs} - G\mathbf{m}$, giving $\mathbf{m}^{est} = [G^T G]^{-1} G^T \mathbf{d}$. If the data are uncorrelated with variance $\sigma_d^2$, then $C_m = \sigma_d^2 [G^T G]^{-1}$.
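
A short sketch of this recipe for a hypothetical straight-line fit; the data and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example: fit a straight line d = m0 + m1*t
t = np.linspace(0.0, 10.0, 50)
sigma_d = 0.3
d = 2.0 + 0.5 * t + sigma_d * rng.standard_normal(t.size)

G = np.column_stack([np.ones_like(t), t])   # data kernel

# m_est = [G^T G]^{-1} G^T d  (solve rather than invert explicitly)
GTG = G.T @ G
m_est = np.linalg.solve(GTG, G.T @ d)

# Covariance of the estimate for uncorrelated data
Cm = sigma_d**2 * np.linalg.inv(GTG)
print(m_est)                 # ~ [2.0, 0.5]
print(np.sqrt(np.diag(Cm)))  # standard errors of intercept and slope
```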

  12. Least squares with prior constraints. Given uncorrelated data with variance $\sigma_d^2$ that satisfy a linear relationship $\mathbf{d} = G\mathbf{m}$, and prior information with variance $\sigma_m^2$ that satisfies a linear relationship $\mathbf{h} = D\mathbf{m}$, the best estimate of the model parameters, $\mathbf{m}^{est}$, solves the stacked system $\begin{bmatrix} G \\ \varepsilon D \end{bmatrix} \mathbf{m} = \begin{bmatrix} \mathbf{d} \\ \varepsilon \mathbf{h} \end{bmatrix}$ in the least-squares sense, with $\varepsilon = \sigma_d / \sigma_m$. Previously, we discussed only the special case $\mathbf{h} = 0$.

  13. Newton's method for non-linear least-squares problems. Given data that satisfy a non-linear relationship $\mathbf{d} = g(\mathbf{m})$, guess a solution $\mathbf{m}^{(k)}$ with $k=0$ and linearize around it: $\Delta\mathbf{m} = \mathbf{m} - \mathbf{m}^{(k)}$ and $\Delta\mathbf{d} = \mathbf{d} - g(\mathbf{m}^{(k)})$, with $\Delta\mathbf{d} = G\Delta\mathbf{m}$ and $G_{ij} = \partial g_i / \partial m_j$ evaluated at $\mathbf{m}^{(k)}$. Then iterate, $\mathbf{m}^{(k+1)} = \mathbf{m}^{(k)} + \Delta\mathbf{m}$ with $\Delta\mathbf{m} = [G^T G]^{-1} G^T \Delta\mathbf{d}$, hoping for convergence.
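
A sketch of the iteration for a hypothetical exponential-decay model $d = m_0 \exp(-m_1 t)$; the model, data, starting guess, and stopping tolerance are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical non-linear model: d_i = m0 * exp(-m1 * t_i)
t = np.linspace(0.0, 5.0, 40)
m_true = np.array([3.0, 0.8])
d = m_true[0] * np.exp(-m_true[1] * t) + 0.05 * rng.standard_normal(t.size)

def g(m):
    return m[0] * np.exp(-m[1] * t)

def jacobian(m):
    # G_ij = dg_i/dm_j, evaluated at the current guess
    e = np.exp(-m[1] * t)
    return np.column_stack([e, -m[0] * t * e])

m = np.array([1.0, 1.0])                       # initial guess m^(0)
for k in range(20):
    G = jacobian(m)
    dd = d - g(m)                              # Delta d
    dm = np.linalg.solve(G.T @ G, G.T @ dd)    # Delta m
    m = m + dm
    if np.linalg.norm(dm) < 1e-10:             # stop when the update stalls
        break
print(m)  # should be close to m_true
```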

  14. 3. Boot-straps

  15. Investigate the statistics of $\mathbf{y}$ by creating many datasets $\mathbf{y}'$ and examining their statistics. Each $\mathbf{y}'$ is created through random sampling, with replacement, of the original dataset $\mathbf{y}$.

  16. Example: statistics of the mean of $\mathbf{y}$, given $N$ data. Draw $N$ random integers in the range 1 to $N$ (e.g. 4, 3, 7, 11, 4, 1, 9, …, 6) and use them to resample the original data $y_1, y_2, \ldots, y_N$ into $y'_1, y'_2, \ldots, y'_N$. Compute the estimate $N^{-1}\sum_i y'_i$. Now repeat a gazillion times and examine the resulting distribution of estimates.
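
A compact sketch of this bootstrap; the dataset and the number of resamplings are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical dataset y of N values
y = rng.normal(loc=10.0, scale=2.0, size=100)
N = y.size

n_boot = 10_000                       # "a gazillion" resamplings
means = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, N, size=N)  # random indices, with replacement
    means[b] = y[idx].mean()          # estimate from the resampled data

# The spread of the bootstrap estimates approximates the standard
# error of the mean (~ sigma/sqrt(N) for this example)
print(means.mean(), means.std())
```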

  17. 4. Interpolation and Splines

  18. Linear splines: in the interval $(x_i, x_{i+1})$, $y(x) = y_i + (y_{i+1}-y_i)(x-x_i)/(x_{i+1}-x_i)$. The 1st derivative is discontinuous at the points $x_i$. [Figure: piecewise-linear interpolation between $(x_i, y_i)$ and $(x_{i+1}, y_{i+1})$, with a kink at the knot.]

  19. Cubic splines: a cubic $a + bx + cx^2 + dx^3$ in each interval, with a different cubic in the neighboring interval, and the 1st and 2nd derivatives continuous at the points $x_i$. [Figure: two adjacent cubics joining smoothly at $(x_i, y_i)$ and $(x_{i+1}, y_{i+1})$.]
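
For comparison, a brief sketch using numpy's linear interpolation and scipy's CubicSpline; the sample points are hypothetical:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical sample points (x_i, y_i)
xi = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
yi = np.sin(xi)

x = np.linspace(0.0, 4.0, 200)

# Linear spline: 1st derivative jumps at the knots
y_lin = np.interp(x, xi, yi)

# Cubic spline: 1st and 2nd derivatives continuous at the knots
cs = CubicSpline(xi, yi)
y_cub = cs(x)

print(np.abs(y_lin - np.sin(x)).max())  # linear interpolation error
print(np.abs(y_cub - np.sin(x)).max())  # cubic spline error (smaller)
```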

  20. 5. Hypothesis Testing

  21. The Null Hypothesis: always a variant of this theme: the result of an experiment differs from the expected value only because of random variation.

  22. Test of significance of results, say to 95% significance: the Null Hypothesis would generate the observed result less than 5% of the time.

  23. Four important distributions:
Normal distribution: distribution of the $x_i$.
Chi-squared distribution: distribution of $\chi^2 = \sum_{i=1}^{N} x_i^2$.
Student's t-distribution: distribution of $t = x_0 / \sqrt{N^{-1}\sum_{i=1}^{N} x_i^2}$.
F-distribution: distribution of $F = \{N^{-1}\sum_{i=1}^{N} x_i^2\} / \{M^{-1}\sum_{i=1}^{M} x_{N+i}^2\}$.

  24. Five tests:
$\bar{m}^{obs} = \bar{m}^{prior}$ when $\bar{m}^{prior}$ and $\sigma^{prior}$ are known: normal distribution.
$\sigma^{obs} = \sigma^{prior}$ when $\bar{m}^{prior}$ and $\sigma^{prior}$ are known: chi-squared distribution.
$\bar{m}^{obs} = \bar{m}^{prior}$ when $\bar{m}^{prior}$ is known but $\sigma^{prior}$ is unknown: t-distribution.
$\sigma_1^{obs} = \sigma_2^{obs}$ when $\bar{m}_1^{prior}$ and $\bar{m}_2^{prior}$ are known: F-distribution.
$\bar{m}_1^{obs} = \bar{m}_2^{obs}$ when $\sigma_1^{prior}$ and $\sigma_2^{prior}$ are unknown: modified t-distribution.
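
The critical values behind such tests can be looked up with scipy.stats; the degrees of freedom below are placeholders, and the one- versus two-sided choices are illustrative assumptions:

```python
from scipy import stats

# 95% critical values (two-sided for normal and t, one-sided for
# chi-squared and F); degrees of freedom are hypothetical
print(stats.norm.ppf(0.975))              # normal
print(stats.chi2.ppf(0.95, df=10))        # chi-squared, N = 10
print(stats.t.ppf(0.975, df=10))          # Student's t
print(stats.f.ppf(0.95, dfn=10, dfd=12))  # F, N = 10, M = 12

# A result is significant at the 95% level when its test statistic
# exceeds the corresponding critical value
```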

  25. 6. filters

  26. Filtering operation, $g(t) = f(t)*h(t)$ ("convolution"): $g(t) = \int_{-\infty}^{t} f(t-\tau)\,h(\tau)\,d\tau \;\Rightarrow\; g_k = \Delta t \sum_{p=-\infty}^{k} f_{k-p}\,h_p$, or alternatively $g(t) = \int_{0}^{\infty} f(\tau)\,h(t-\tau)\,d\tau \;\Rightarrow\; g_k = \Delta t \sum_{p=0}^{\infty} f_p\,h_{k-p}$.

  27. How to do convolution by hand, with $\mathbf{x} = [x_0, x_1, x_2, x_3, x_4, \ldots]^T$ and $\mathbf{y} = [y_0, y_1, y_2, y_3, y_4, \ldots]^T$: reverse one time-series, line the two up, and multiply rows; this gives the first element of $\mathbf{x}*\mathbf{y}$:
x0, x1, x2, x3, x4, …
…, y4, y3, y2, y1, y0   →   [x*y]_0 = x_0 y_0
Then slide, multiply rows, and add to get the second element:
x0, x1, x2, x3, x4, …
…, y4, y3, y2, y1, y0   →   [x*y]_1 = x_0 y_1 + x_1 y_0
And so on.
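
The same reverse-and-slide recipe in a few lines of Python, checked against numpy's built-in convolution; the two short series are made up:

```python
import numpy as np

# Hypothetical short series
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# Convolution "by hand": for each lag k, multiply the overlapping
# terms of x and reversed y, then add: g_k = sum_p x_{k-p} y_p
n = x.size + y.size - 1
g = np.zeros(n)
for k in range(n):
    for p in range(y.size):
        if 0 <= k - p < x.size:
            g[k] += x[k - p] * y[p]

print(g)                  # [ 4. 13. 28. 27. 18.]
print(np.convolve(x, y))  # identical
```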

  28. Matrix formulations of $g(t) = f(t)*h(t)$: $\mathbf{g} = F\mathbf{h}$ and $\mathbf{g} = H\mathbf{f}$, e.g.
$\begin{bmatrix} g_0 \\ g_1 \\ \vdots \\ g_N \end{bmatrix} = \Delta t \begin{bmatrix} f_0 & 0 & \cdots & 0 \\ f_1 & f_0 & \cdots & 0 \\ \vdots & & \ddots & \\ f_N & \cdots & f_1 & f_0 \end{bmatrix} \begin{bmatrix} h_0 \\ h_1 \\ \vdots \\ h_N \end{bmatrix}$
and similarly $\mathbf{g} = H\mathbf{f}$, with $H$ the lower-triangular Toeplitz matrix built from $h_0, h_1, \ldots, h_N$.

  29. For $\mathbf{g} = H\mathbf{f}$, the least-squares equation is $[H^T H]\,\mathbf{f} = H^T \mathbf{g}$. The right-hand side $H^T\mathbf{g}$ is the cross-correlation, $X$, of $h$ and $g$, and $[H^T H]$ is built from the autocorrelation, $A$, of $h$:
$\begin{bmatrix} A(0) & A(1) & A(2) & \cdots & A(N) \\ A(1) & A(0) & A(1) & \cdots & A(N-1) \\ A(2) & A(1) & A(0) & \cdots & A(N-2) \\ \vdots & & & \ddots & \\ \end{bmatrix} \begin{bmatrix} f_0 \\ f_1 \\ \vdots \\ f_N \end{bmatrix} = \begin{bmatrix} X(0) \\ X(1) \\ \vdots \\ X(N) \end{bmatrix}$
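
A sketch of least-squares deconvolution via these normal equations, with a hypothetical filter $h$ and signal $f$; scipy's toeplitz builds the matrix $H$ described above:

```python
import numpy as np
from scipy.linalg import toeplitz

# Hypothetical filter problem: recover f from g = H f
dt = 1.0
h = np.array([1.0, 0.5, 0.25, 0.125])
f_true = np.array([0.0, 2.0, 1.0, 0.0, -1.0, 0.0])

N = f_true.size
# Lower-triangular Toeplitz matrix with first column h (zero-padded)
col = np.zeros(N)
col[:h.size] = h
H = dt * toeplitz(col, np.zeros(N))

g = H @ f_true

# Least-squares (normal) equations: [H^T H] f = H^T g
f_est = np.linalg.solve(H.T @ H, H.T @ g)
print(f_est)  # recovers f_true
```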

  30. $A_i$ and $X_i$. Auto-correlation of a time-series $T(t)$: $A(\tau) = \int_{-\infty}^{+\infty} T(t)\,T(t-\tau)\,dt$, or discretely $A_i = \sum_j T_j T_{j-i}$. Cross-correlation of two time-series $T^{(1)}(t)$ and $T^{(2)}(t)$: $X(\tau) = \int_{-\infty}^{+\infty} T^{(1)}(t)\,T^{(2)}(t-\tau)\,dt$, or discretely $X_i = \sum_j T^{(1)}_j T^{(2)}_{j-i}$.

  31. 7. fourier transforms and spectra

  32. Integral transforms: $C(\omega) = \int_{-\infty}^{+\infty} T(t)\,e^{-i\omega t}\,dt$ and $T(t) = (1/2\pi)\int_{-\infty}^{+\infty} C(\omega)\,e^{+i\omega t}\,d\omega$. Discrete transforms (DFT): $C_k = \sum_{n=0}^{N-1} T_n \exp(-2\pi i k n/N)$ with $k = 0, \ldots, N-1$, and $T_n = N^{-1}\sum_{k=0}^{N-1} C_k \exp(+2\pi i k n/N)$ with $n = 0, \ldots, N-1$. Frequency step: $\Delta\omega\,\Delta t = 2\pi/N$. Maximum (Nyquist) frequency: $f_{max} = 1/(2\Delta t)$, i.e. $\omega_{max} = \pi/\Delta t$.
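
numpy's FFT follows the same sign and normalization convention as the DFT above; a small sketch with an invented 10 Hz cosine shows the Nyquist limit and the recovery of the peak frequency:

```python
import numpy as np

# Hypothetical sampled time series
dt = 0.01                          # sample interval
N = 256
t = np.arange(N) * dt
T = np.cos(2 * np.pi * 10.0 * t)   # a 10 Hz cosine

# numpy uses the slide's convention: C_k = sum_n T_n exp(-2*pi*i*k*n/N)
C = np.fft.fft(T)
f = np.fft.fftfreq(N, d=dt)        # frequencies in Hz

print(f.max())                     # just under Nyquist f = 1/(2*dt) = 50 Hz
print(f[np.argmax(np.abs(C[:N // 2]))])  # ~10 Hz, the spectral peak
```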

  33. Aliasing and cyclicity in a digital world: $\omega_{n+N} = \omega_n$, and since time and frequency play symmetrical roles in $\exp(-i\omega t)$, also $t_{k+N} = t_k$.

  34. One FFT that you should know: the FFT of a spike at $t=0$ is a constant: $C(\omega) = \int_{-\infty}^{+\infty} \delta(t)\,e^{-i\omega t}\,dt = \exp(0) = 1$.

  35. Error estimates for the DFT. Assume uncorrelated, normally-distributed data, $d_n = T_n$, with variance $\sigma_d^2$. The matrix $G$ in $G\mathbf{m} = \mathbf{d}$ is $G_{nk} = N^{-1}\exp(+2\pi i k n/N)$. The problem $G\mathbf{m} = \mathbf{d}$ is linear, so the unknowns, $m_k = C_k$ (the coefficients of the complex exponentials), are also normally-distributed. Since the exponentials are orthogonal, $G^H G = N^{-1} I$ is diagonal, and $C_m = \sigma_d^2 [G^H G]^{-1} = N\sigma_d^2 I$ is diagonal, too. Apportioning variance equally between the real and imaginary parts of each $C_k$, each part has variance $\sigma^2 = N\sigma_d^2/2$. The spectral power $s_k^2 = (C^r_k)^2 + (C^i_k)^2$ is the sum of the squares of two uncorrelated, normally distributed random variables and is thus $\chi^2_2$-distributed (in units of $\sigma^2$). The 95% value of $\chi^2_2$ is about 5.99, so to be significant a peak must exceed $5.99\,N\sigma_d^2/2$.

  36. Convolution theorem: transform$[\,f(t)*g(t)\,]$ = transform$[g(t)]$ × transform$[f(t)]$.
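
A quick numerical check of the theorem, with random sequences zero-padded so that the DFT's circular convolution matches linear convolution:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two hypothetical sequences; pad to the full convolution length
f = rng.standard_normal(64)
g = rng.standard_normal(64)
n = f.size + g.size - 1

lhs = np.fft.fft(np.convolve(f, g), n)     # transform of f*g
rhs = np.fft.fft(f, n) * np.fft.fft(g, n)  # product of the transforms

print(np.allclose(lhs, rhs))  # True
```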

  37. Power spectrum of a stationary time-series $T(t)$: $C(\omega) = \int_{-T/2}^{+T/2} T(t)\,e^{-i\omega t}\,dt$ and $S(\omega) = \lim_{T\to\infty} T^{-1}\,|C(\omega)|^2$. $S(\omega)$ is called the power spectral density: the spectrum normalized by the length of the time series.

  38. Relationship of power spectral density to the DFT: to compute the Fourier transform, $C(\omega)$, you multiply the DFT coefficients, $C_k$, by $\Delta t$. So to get power spectral density, $T^{-1}|C(\omega)|^2 = (N\Delta t)^{-1}\,|\Delta t\,C_k|^2 = (\Delta t/N)\,|C_k|^2$: you multiply the DFT spectrum, $|C_k|^2$, by $\Delta t/N$.

  39. Windowed time series: the Fourier transform of a windowed time-series is the Fourier transform of the long time-series convolved with the Fourier transform of the windowing function.

  40. Window functions. Boxcar: its Fourier transform is a sinc function, which has a narrow central peak but large side lobes. Hanning (cosine) taper: its Fourier transform has a somewhat wider central peak but low side lobes.
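
A sketch comparing the side-lobe levels of the two windows numerically; the window length, padding amount, and the peak-side-lobe helper are illustrative choices, not a standard routine:

```python
import numpy as np

N = 64
boxcar = np.ones(N)
hanning = np.hanning(N)   # cosine (Hanning) taper

# Zero-pad heavily to resolve the side-lobe structure of each window
n_fft = 4096
B = np.abs(np.fft.rfft(boxcar, n_fft))
H = np.abs(np.fft.rfft(hanning, n_fft))

def peak_sidelobe_db(W):
    # Walk down the central peak to the first null, then report the
    # largest remaining lobe relative to the peak, in dB
    main = W.max()
    k = np.argmax(W)
    while k + 1 < W.size and W[k + 1] < W[k]:
        k += 1
    return 20 * np.log10(W[k:].max() / main)

print(peak_sidelobe_db(B))  # ~ -13 dB: large side lobes
print(peak_sidelobe_db(H))  # ~ -31 dB: low side lobes
```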

  41. 8. EOF’s and factor analysis

  42. SamplesNM Representation of samples as a linear mixing of factors S = C F (A in s1) (B in s1) (C in s1) (A in s2) (B in s2) (C in s2) (A in s3) (B in s3) (C in s3) … (A in sN) (B in sN) (C in sN) (f1 in s1) (f2 in s1) (f3 in s1) (f1 in s2) (f2 in s2) (f3 in s2) (f1 in s3) (f2 in s3) (f3 in s3) … (f1 in sN) (f2 in sN) (f3 in sN) (A in f1) (B in f1) (C in f1) (A in f2) (B in f2) (C in f2) (A in f3) (B in f3) (C in f3) = FactorsMM CoefficientsNM

  43. SamplesNM data approximated with only most important factorsp most important factors = those with the biggest coefficients S  C’ F’ (A in s1) (B in s1) (C in s1) (A in s2) (B in s2) (C in s2) (A in s3) (B in s3) (C in s3) … (A in sN) (B in sN) (C in sN) (f1 in s1) (f2 in s1) (f1 in s2) (f2 in s2) (f1 in s3) (f2 in s3) … (f1 in sN) (f2 in sN) (A in f1) (B in f1) (C in f1) (A in f2) (B in f2) (C in f2) = ignore f3 ignore f3 selectedcoefficientsNp selectedfactors pM

  44. Singular Value Decomposition (SVD): any $N \times M$ matrix $S$ can be written as the product of three matrices, $S = U\Lambda V^T$, where $U$ is $N \times N$ and satisfies $U^T U = U U^T = I$, $V$ is $M \times M$ and satisfies $V^T V = V V^T = I$, and $\Lambda$ is an $N \times M$ diagonal matrix of singular values, $\lambda_i$.

  45. SVD decomposition of $S$: write $S = U\Lambda V^T = [U\Lambda]\,[V^T] = C\,F$, so the coefficients are $C = U\Lambda$ and the factors are $F = V^T$. The factors with the biggest $\lambda_i$'s are the most important.
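
A small numpy sketch of this recipe on an invented sample matrix built from two underlying factors:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical sample matrix S (N samples, M elements), built from
# 2 underlying factors plus a little noise
N, M = 20, 3
F_true = rng.standard_normal((2, M))
S = rng.standard_normal((N, 2)) @ F_true + 0.01 * rng.standard_normal((N, M))

# S = U Lambda V^T; numpy returns the singular values as a vector
U, lam, VT = np.linalg.svd(S, full_matrices=False)
print(lam)  # two large singular values, one tiny one

# Keep the p most important factors: C' = (U Lambda)[:, :p], F' = V^T[:p, :]
p = 2
C_p = U[:, :p] * lam[:p]
F_p = VT[:p, :]
print(np.abs(S - C_p @ F_p).max())  # small: two factors suffice
```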

  46. Transformations of factors. If you choose the $p$ most important factors, they define both a subspace in which the samples must lie and a set of coordinate axes for that subspace. The choice of axes is not unique, and could be changed through a transformation, $T$: $F^{new} = T\,F^{old}$. A requirement is that $T^{-1}$ exists, else $F^{new}$ will not span the same subspace as $F^{old}$. Then $S = CF = CIF = (CT^{-1})(TF) = C^{new}F^{new}$, so you could try to implement desirable factors by designing an appropriate transformation matrix, $T$.

  47. 9. Metropolis Algorithm and Simulated Annealing

  48. Metropolis algorithm: a method to generate a vector $\mathbf{x}$ of realizations of the distribution $p(x)$.

  49. The process is iterative: start with an $x$, say $x^{(i)}$; then randomly generate another $x$ in its neighborhood, say $x^{(i+1)}$, using a distribution $Q(x^{(i+1)}|x^{(i)})$; then test whether you will accept the new $x^{(i+1)}$. If it passes, you append $x^{(i+1)}$ to the vector $\mathbf{x}$ that you are accumulating; if it fails, you append $x^{(i)}$.

  50. A reasonable choice for $Q(x^{(i+1)}|x^{(i)})$ is a normal distribution with mean $x^{(i)}$ and a variance $\sigma_x^2$ that quantifies the sense of neighborhood. The acceptance test is as follows: first compute the quantity $a = \dfrac{p(x^{(i+1)})\,Q(x^{(i)}|x^{(i+1)})}{p(x^{(i)})\,Q(x^{(i+1)}|x^{(i)})}$. If $a > 1$, always accept $x^{(i+1)}$; if $a < 1$, accept $x^{(i+1)}$ with probability $a$ and accept $x^{(i)}$ with probability $1-a$.
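
A minimal sketch of the algorithm for a hypothetical one-dimensional target $p(x)$; with the symmetric normal $Q$ above, the $Q$ ratio cancels from $a$:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical target distribution (unnormalized is fine):
# a mixture of two Gaussians
def p(x):
    return np.exp(-0.5 * (x - 2.0)**2) + 0.5 * np.exp(-0.5 * (x + 2.0)**2)

sigma_x = 1.0          # width of the "neighborhood"
n_samples = 50_000
samples = np.empty(n_samples)

x = 0.0                # starting point x^(0)
for i in range(n_samples):
    x_new = rng.normal(x, sigma_x)   # Q: normal centered on x^(i)
    a = p(x_new) / p(x)              # Q ratio cancels (symmetric Q)
    if a >= 1 or rng.random() < a:
        x = x_new                    # accept x^(i+1)
    samples[i] = x                   # append accepted or repeated x

print(samples.mean())  # pulled toward the heavier mode near +2
```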
