Lecture 2 Probability and Measurement Error, Part 1


Syllabus

Lecture 01 Describing Inverse Problems
Lecture 02 Probability and Measurement Error, Part 1
Lecture 03 Probability and Measurement Error, Part 2
Lecture 04 The L2 Norm and Simple Least Squares
Lecture 05 A Priori Information and Weighted Least Squares
Lecture 06 Resolution and Generalized Inverses
Lecture 07 Backus-Gilbert Inverse and the Trade-Off of Resolution and Variance
Lecture 08 The Principle of Maximum Likelihood
Lecture 09 Inexact Theories
Lecture 10 Nonuniqueness and Localized Averages
Lecture 11 Vector Spaces and Singular Value Decomposition
Lecture 12 Equality and Inequality Constraints
Lecture 13 L1, L∞ Norm Problems and Linear Programming
Lecture 14 Nonlinear Problems: Grid and Monte Carlo Searches
Lecture 15 Nonlinear Problems: Newton's Method
Lecture 16 Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
Lecture 17 Factor Analysis
Lecture 18 Varimax Factors, Empirical Orthogonal Functions
Lecture 19 Backus-Gilbert Theory for Continuous Problems; Radon's Problem
Lecture 20 Linear Operators and Their Adjoints
Lecture 21 Fréchet Derivatives
Lecture 22 Exemplary Inverse Problems, incl. Filter Design
Lecture 23 Exemplary Inverse Problems, incl. Earthquake Location
Lecture 24 Exemplary Inverse Problems, incl. Vibrational Problems


Purpose of the Lecture

review random variables and their probability density functions

introduce correlation and the multivariate Gaussian distribution

relate error propagation to functions of random variables


Part 1: random variables and their probability density functions


random variable, d

no fixed value until it is realized
before measurement: d = ? (indeterminate)
after measurement: a specific value, e.g. d = 1.04 or d = 0.98


random variables have systematics

a tendency to take on some values more often than others


N realizations of data
[Figure: realizations di plotted against index i, for three example datasets (A), (B), (C)]


the probability that the variable lies between d and d+Δd is p(d)Δd
[Figure: a p.d.f. p(d) with a shaded strip of width Δd]


in general, probability is an integral
the probability P that d is between d1 and d2:
P(d1, d2) = ∫ p(d) dd, integrated from d1 to d2


the probability that d has some value is 100% or unity
the probability that d is between its minimum and maximum bounds, dmin and dmax:
∫ p(d) dd = 1, integrated from dmin to dmax


How do these two p.d.f.'s differ?
[Figure: two p.d.f.'s p(d), both on 0 ≤ d ≤ 5, with different shapes]


Summarizing a probability density function

typical value: the "center of the p.d.f."
amount of scatter around the typical value: the "width of the p.d.f."


Several possibilities for a typical value:
dML: the point beneath the peak, the "maximum likelihood point" or "mode"
dmedian: the point dividing the area in half, the "median"
<d>: the balancing point, the "mean" or "expectation"

these can all be different: dML ≠ dmedian ≠ <d>


formula for the "mean" or "expected value" <d>

step 1: start with the usual formula for the mean of N data:
<d> ≈ (1/N) Σi di

step 2: replace the data with their histogram, where Ns counts the data falling in bin s, centered at ds:
<d> ≈ Σs (Ns/N) ds

step 3: replace the histogram with the probability distribution; as N grows, Ns/N → P(ds) = p(ds)Δd, so
<d> ≈ Σs ds P(ds) → <d> = ∫ d p(d) dd



quantifying width

define a function q(d) that grows away from the typical value:
q(d) = (d − <d>)²
then the product q(d)p(d) is
small if most of the area is near <d>, that is, a narrow p(d)
large if most of the area is far from <d>, that is, a wide p(d)
so quantify width as the area under q(d)p(d)




variance
σ² = ∫ (d − <d>)² p(d) dd
where <d> is the mean; the width is actually the square root of the variance, that is, σ



estimating mean and variance from data
usual formula for the "sample mean": dbar = (1/N) Σi di
usual formula for the square of the "sample standard deviation": σ² = (1/(N−1)) Σi (di − dbar)²

MATLAB scripts for mean and variance

from a tabulated p.d.f. p, on a grid of d-values with spacing Dd:

dbar = Dd*sum(d.*p);     % mean: approximates the integral of d*p(d)
q = (d-dbar).^2;         % squared deviation from the mean
sigma2 = Dd*sum(q.*p);   % variance: approximates the integral of q(d)*p(d)
sigma = sqrt(sigma2);    % width: square root of the variance

from realizations of data dr:

dbar = mean(dr);         % sample mean
sigma = std(dr);         % sample standard deviation
sigma2 = sigma^2;        % sample variance
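For context, a minimal sketch of the inputs these scripts expect; the grid, spacing, Gaussian p.d.f., and realizations here are all assumed for illustration:

Dd = 0.01;                           % grid spacing
d = (0:Dd:10)';                      % grid of d-values
p = exp(-(d-5).^2/2)/sqrt(2*pi);     % assumed p.d.f. tabulated on the grid
dr = 5 + randn(1000,1);              % 1000 assumed realizations of the data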


two important probability density functions:
uniform
Gaussian (or Normal)


uniform p.d.f.
box-shaped function: p(d) = 1/(dmax − dmin) for dmin ≤ d ≤ dmax
probability is the same everywhere in the range of possible values
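A quick numerical check that the box-shaped function has unit area; the bounds dmin = 0 and dmax = 5 are assumed for illustration:

dmin = 0; dmax = 5;                  % assumed bounds
Dd = 0.01; d = (dmin:Dd:dmax)';      % grid of d-values
p = ones(size(d))/(dmax-dmin);       % uniform p.d.f., 1/(dmax-dmin)
A = Dd*sum(p);                       % total area, approximately 1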


Gaussian (or "Normal") p.d.f.
bell-shaped function:
p(d) = (1/(√(2π)σ)) exp( −(d − <d>)²/(2σ²) )
large probability near the mean <d>; the variance is σ²


Gaussian p.d.f.
probability between <d> ± nσ:
n = 1: 68.27%, n = 2: 95.45%, n = 3: 99.73%
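These areas follow from the error function, so a one-line MATLAB check is possible (erf is built in):

n = [1 2 3];
P = erf(n/sqrt(2));    % probability within n standard deviations of the mean
% P = [0.6827 0.9545 0.9973]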


Part 2: correlated errors


uncorrelated random variables
no pattern between values of one variable and values of another:
when d1 is higher than its mean, d2 is higher or lower than its mean with equal probability


joint probability density function, uncorrelated case
[Figure: the joint p.d.f. p(d1, d2), centered on the means <d1> and <d2>; no tendency for d2 to be either high or low when d1 is high]


in the uncorrelated case
the joint p.d.f. is just the product of the individual p.d.f.'s:
p(d1, d2) = p(d1) p(d2)


[Figure: a correlated joint p.d.f. p(d1, d2) with widths 2σ1 and 2σ2; tendency for d2 to be high when d1 is high]


[Figure: the correlated joint p.d.f. is characterized by its widths 2σ1 and 2σ2 and by the angle θ of its principal axes]


[Figure: three joint p.d.f.'s of d1 and d2, panels (A)–(C), each centered on <d1> and <d2>]


formula for covariance
cov(d1, d2) = ∫∫ (d1 − <d1>)(d2 − <d2>) p(d1, d2) dd1 dd2
+ positive correlation: high d1 goes with high d2
− negative correlation: high d1 goes with low d2


joint p.d.f.: the mean is a vector, the covariance is a symmetric matrix
diagonal elements: variances
off-diagonal elements: covariances


estimating covariance from a table D of data
Dki: realization k of data-type i
Cij ≈ (1/N) Σk (Dki − d̄i)(Dkj − d̄j), where d̄i is the sample mean of data-type i
in MATLAB: C = cov(D);
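A minimal sketch of this estimator at work; the true covariance and the use of chol to generate correlated realizations are illustrative assumptions:

Ctrue = [1.0 0.8; 0.8 1.0];      % assumed true covariance matrix
N = 10000;                       % number of realizations
D = randn(N,2) * chol(Ctrue);    % table of correlated realizations
C = cov(D);                      % estimate; approaches Ctrue as N grows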


univariate p.d.f. formed from joint p.d.f.
p(d) → p(di): the behavior of di irrespective of the other ds
integrate over everything but di, e.g.
p(d1) = ∫ p(d1, d2) dd2


[Figure: the joint p.d.f. p(d1, d2); integrating over d2 gives the univariate p.d.f. p(d1), and integrating over d1 gives p(d2)]
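A minimal numerical sketch of marginalization, assuming an uncorrelated joint p.d.f. tabulated as the product of two Gaussians (all variable names hypothetical):

Dd = 0.1;
d1 = (0:Dd:10)'; d2 = (0:Dd:10)';      % grids for d1 and d2
p1 = exp(-(d1-4).^2/2)/sqrt(2*pi);     % assumed p.d.f. for d1
p2 = exp(-(d2-6).^2/2)/sqrt(2*pi);     % assumed p.d.f. for d2
P = p1 * p2';            % uncorrelated case: joint p.d.f. is the product
pd1 = Dd * sum(P,2);     % integrate over d2: recovers p(d1)
pd2 = Dd * sum(P,1)';    % integrate over d1: recovers p(d2)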



multivariate Gaussian (or Normal) p.d.f.:
p(d) = (2π)^(−N/2) |det C|^(−1/2) exp( −½ (d − <d>)ᵀ C⁻¹ (d − <d>) )
with mean vector <d> and covariance matrix C
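A sketch evaluating this p.d.f. at a single point, with an assumed mean vector and covariance matrix, coded directly from the formula so no toolbox is needed:

dbar = [4; 6];             % assumed mean vector
C = [1.0 0.8; 0.8 2.0];    % assumed covariance matrix
d = [5; 5];                % point at which to evaluate p(d)
N = length(d);
r = d - dbar;              % deviation from the mean
p = exp(-0.5*(r'*(C\r))) / sqrt((2*pi)^N * det(C));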


Part 3: functions of random variables


functions of random variables
data with measurement error → data analysis process → inferences with uncertainty


functions of random variables
given p(d), with m = f(d), what is p(m)?


univariate p.d.f.: use the chain rule
rule for transforming a univariate p.d.f.:
p(m) = p(d(m)) |dd/dm|


multivariate p.d.f.
Jacobian determinant:
J = |det(∂di/∂mj)|, the determinant of the matrix with elements ∂di/∂mj
rule for transforming a multivariate p.d.f.:
p(m) = p(d(m)) J


simple example
data with measurement error → data analysis process → inferences with uncertainty
one datum, d, with a uniform p.d.f. on 0 < d < 1
one model parameter, m, with m = 2d


m = 2d and d = ½m
d = 0 → m = 0
d = 1 → m = 2
dd/dm = ½
p(d) = 1, so p(m) = ½ on 0 < m < 2


[Figure: p(d) = 1 on the interval 0 < d < 1; p(m) = ½ on the interval 0 < m < 2]
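A minimal Monte Carlo check of this result:

d = rand(100000,1);                    % realizations of d, uniform on (0,1)
m = 2*d;                               % transformed realizations
Dm = 0.1; mc = Dm/2 : Dm : 2-Dm/2;     % histogram bin centers on (0,2)
pest = hist(m,mc) / (length(m)*Dm);    % empirical p.d.f.; approximately 1/2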


another simple example
data with measurement error → data analysis process → inferences with uncertainty
one datum, d, with a uniform p.d.f. on 0 < d < 1
one model parameter, m, with m = d²


m = d² and d = m½
d = 0 → m = 0
d = 1 → m = 1
dd/dm = ½m−½
p(d) = 1, so p(m) = ½m−½ on 0 < m < 1


[Figure: (A) p(d) = 1 on 0 < d < 1; (B) p(m) = ½m−½ on 0 < m < 1, which grows without bound as m → 0]
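The same Monte Carlo check for m = d², overlaid on the chain-rule prediction:

d = rand(100000,1);                    % d uniform on (0,1)
m = d.^2;                              % transformed realizations on (0,1)
Dm = 0.02; mc = Dm/2 : Dm : 1-Dm/2;    % histogram bin centers
pest = hist(m,mc) / (length(m)*Dm);    % empirical p.d.f.
ptrue = 0.5 ./ sqrt(mc);               % chain-rule p.d.f., (1/2) m^(-1/2)
plot(mc, pest, 'o', mc, ptrue, '-');   % the two agree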


multivariate example
data with measurement error → data analysis process → inferences with uncertainty
2 data, d1 and d2, each with a uniform p.d.f. on 0 < d < 1
2 model parameters, m1 and m2, with
m1 = d1 + d2
m2 = d1 − d2


[Figure: the unit square 0 < d1, d2 < 1 in the (d1, d2) plane, with the axes of the new coordinates m1 = d1 + d2 and m2 = d1 − d2 drawn through it]


m = Md with M = [1 1; 1 −1], and |det(M)| = 2
d = M⁻¹m, so ∂d/∂m = M⁻¹ and J = |det(M⁻¹)| = ½
p(d) = 1
p(m) = ½


[Figure: (A) the unit square in the (d1, d2) plane; (B) its image in the (m1, m2) plane, a square with sides of length √2, rotated 45°; (C) the univariate p.d.f. p(d1); (D) the univariate p.d.f. p(m1)]

Note that the shape in m is different than the shape in d.

The shape is a square with sides of length √2. The amplitude is ½. So the area is ½ × √2 × √2 = 1.

moral
p(m) can behave quite differently from p(d)

