
Tutorial 5, STAT1301 Fall 2010, 26OCT2010 , MB103@HKU By Joseph Dong





  1. Numerical Characteristics of a Random Variable · Generating Functions · Strictly Monotonic Transformation of a Random Variable · Expectation as Integration · Markov's Inequality. Tutorial 5, STAT1301 Fall 2010, 26OCT2010, MB103@HKU. By Joseph Dong

  2. Recall: What is a Random Variable? A Random Variable is a function defined on a sample space. The sample space contains randomness. The state space is accordingly random. The Random Variable itself is deterministic.

  3. Recall: What have we done about RVs? • We have defined the Random Variable as a function (with a special restriction we don't want to discuss in this course) from a given state space to a sample space (the total set of outcomes from a random experiment), usually a subset of ℝ. • In symbols: X : state space → sample space ⊆ ℝ. • The sample space is the platform where we adopt the notion of a "variable".

  4. Recall: What have we done about RVs? • We have studied the probability distribution of a random variable. • This is the law governing the random variable's dance in the sample space. • Two equivalent ways of describing the law: • By a probability measure on the sample space (takes a set as argument). Listing the probability measure of every atom of the sample space is equivalent to defining a PDF or PMF, or a general probability function. • By the distribution function F (takes a number as argument): • F is non-decreasing • limₓ→₋∞ F(x) = 0, limₓ→₊∞ F(x) = 1 • F is right-continuous
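The three distribution-function properties above can be checked numerically. A minimal sketch, using a fair six-sided die as a hypothetical discrete random variable (my choice of example, not one from the slides):

```python
from fractions import Fraction

# Hypothetical discrete r.v.: a fair six-sided die, P(X = k) = 1/6.
pmf = {k: Fraction(1, 6) for k in range(1, 7)}

def cdf(x):
    """Distribution function F(x) = P(X <= x)."""
    return sum((p for k, p in pmf.items() if k <= x), Fraction(0))

# Property 1: F is non-decreasing.
values = [cdf(x) for x in (0, 1, 2.5, 3, 6, 10)]
assert all(a <= b for a, b in zip(values, values[1:]))

# Property 2: F(x) -> 0 as x -> -infinity and F(x) -> 1 as x -> +infinity.
assert cdf(-100) == 0 and cdf(100) == 1

# Property 3: F is right-continuous -- stepping a tiny epsilon to the
# right of an atom does not change the value.
assert cdf(3) == cdf(3 + 1e-12)
```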

  5. Numerical Characteristics of a Random Variable and Related Topics • Workplace = a numerical sample space (a subset of ℝ), discrete or continuous • Expectation: E(X) = ∫ x f(x) dx or Σ x p(x) • Law Of The Unconscious Statistician: E[g(X)] = ∫ g(x) f(x) dx • Moments = expectations of positive integer powers: E(Xᵏ) • Variance = 2nd-order central moment: Var(X) = E[(X − EX)²] • Compute moments using the Moment Generating Function M_X(t) = E(e^{tX}) • Markov & Chebyshev Inequalities: P(X ≥ a) ≤ E(X)/a for X ≥ 0, a > 0; P(|X − EX| ≥ a) ≤ Var(X)/a² • Strictly Monotonic Transformation of an R.V. and an invariant differential • When g is strictly increasing and Y = g(X), then f_Y(y) dy = f_X(x) dx

  6. Linearity of Expectation: E(aX + bY) = a E(X) + b E(Y), where a and b can be any real constants. Simple cases: E(aX) = a E(X) and E(X + Y) = E(X) + E(Y).

  7. Technical Exercises • Handout Problems 1, 2, and 3. • This is the level you had already mastered before yesterday's midterm.

  8. A Closer Look at Expectation • Expectation is a generalized integral. • Let's forget about probability theory for a few minutes and go back to calculus. • Usually we use a homogeneous horizontal axis for integration: the density is the same everywhere, as in ∫ g(x) dx. • But we can generalize by allowing the density to vary from place to place along the horizontal axis. • To take care of the density, we introduce a density function ρ(x) into the integral: ∫ g(x) ρ(x) dx. (Of course the integral will now change value, except when ρ(x) = 1 everywhere.)

  9. Center of Mass and Expectation • For now let's forget about the curve and focus on the x-axis. • If we treat a segment of the horizontal axis as a massed segment with linear mass density ρ(x), we can compute the coordinate of its center of mass, x̄, according to the formula: x̄ = ∫ x ρ(x) dx / ∫ ρ(x) dx. • One more step: x̄ = ∫ x · [ρ(x) / ∫ ρ(t) dt] dx. • Note that 1/∫ ρ(t) dt can be regarded as a normalizing constant, and the normalized density could be some real probability density! • Now suppose the x-axis is the state space of some random variable X, and the normalized ρ is actually f_X, the probability density; then x̄ and E(X) are the same thing — both conceptually and technically.
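The center-of-mass formula can be evaluated by a Riemann sum. A sketch under an assumed density ρ(x) = 2x on [0, 1] (my illustrative choice; it happens to already be a valid pdf, with E(X) = 2/3):

```python
# Midpoint-rule approximation of  x_bar = ∫ x ρ(x) dx / ∫ ρ(x) dx
# for ρ(x) = 2x on [0, 1], whose center of mass is E(X) = 2/3.
n = 100_000
dx = 1.0 / n
midpoints = ((i + 0.5) * dx for i in range(n))

rho = lambda x: 2.0 * x
mass = 0.0       # total mass  ∫ ρ(x) dx  (the normalizing constant)
moment = 0.0     # first moment  ∫ x ρ(x) dx
for x in ((i + 0.5) * dx for i in range(n)):
    mass += rho(x) * dx
    moment += x * rho(x) * dx

x_bar = moment / mass
assert abs(mass - 1.0) < 1e-9   # ρ is already normalized
assert abs(x_bar - 2 / 3) < 1e-6
```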

  10. Exercises: Handout Problems 4 & 5

  11. Law of the Unconscious Statistician • We go one step further to find the expectation of any function g(X) of X, that is, E[g(X)]. • Go back to the previous unresolved integral ∫ g(x) ρ(x) dx, and, without loss of generality, assume the density here is a probabilistic one, f_X(x). • Obs 1: If two r.v.'s share the same sample space and the same distribution, then they must have the same expectation. • Therefore E[g(X)] depends only on the distribution of Y = g(X). • Obs 2: If two values, say x₁ and x₂, are mapped to the same value by g, that is, if g(x₁) = g(x₂) = y, then their probabilities pool together into P(Y = y). • Therefore E[g(X)] = ∫ g(x) f_X(x) dx.
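LOTUS says the integral on X's space and the direct average of Y = g(X) agree. A sketch under assumed choices (X uniform on (0, 1), g(x) = x², so both routes should give ∫₀¹ x² dx = 1/3):

```python
import random

random.seed(1)
# Assumed for illustration: X ~ Uniform(0, 1) so f_X(x) = 1, and g(x) = x**2.
g = lambda x: x * x

# Route 1 (LOTUS): integrate g against f_X on X's space, E[g(X)] = ∫ g(x) f_X(x) dx.
n = 100_000
dx = 1.0 / n
lotus = sum(g((i + 0.5) * dx) * 1.0 * dx for i in range(n))

# Route 2 (the "conscious" route): sample X, form Y = g(X), average Y directly.
monte_carlo = sum(g(random.random()) for _ in range(200_000)) / 200_000

assert abs(lotus - 1 / 3) < 1e-6        # midpoint rule is accurate here
assert abs(monte_carlo - lotus) < 0.01  # the two routes agree
```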

  12. A New Level of Understanding • Now we understand the meaning of the new integral ∫ g(x) f_X(x) dx, where f_X is a probability density on the x-axis: it is the expectation of g(X). • Expectation is an integration of the general kind. • Why "unconscious"? Users of the formula are unconscious of the fact that Y = g(X), as a random variable, has a different sample space than X has. Hence the definition of E(Y), or more explicitly written E[g(X)], should be ∫ y f_Y(y) dy, and it takes some reasoning to establish the equality of this integral with the one used in LOTUS.

  13. Markov's Inequality: P(X ≥ a) ≤ E(X)/a for every a > 0. Caution: Markov's Inequality only works for non-negative r.v.'s.
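A quick numerical sanity check of the bound, under an assumed non-negative distribution (X ~ Exponential(1), so E(X) = 1 — my choice, not the slides'):

```python
import random

random.seed(2)
# Non-negative r.v.: X ~ Exponential(1), so E[X] = 1.
xs = [random.expovariate(1.0) for _ in range(100_000)]
mean = sum(xs) / len(xs)

# Markov's inequality: P(X >= a) <= E[X] / a for every a > 0.
# (For a < 1 the bound exceeds 1 and is trivially true.)
for a in (0.5, 1.0, 2.0, 5.0):
    tail = sum(x >= a for x in xs) / len(xs)
    assert tail <= mean / a + 1e-9
```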

  14. Generating Function • The generating function is a general math technique. • Whenever you have a function whose value set (range) is a countable set {a₀, a₁, a₂, …}, you can embed these values in a power series: G(s) = Σₙ aₙ sⁿ, where {aₙ} is the range of the function. In specific cases the power series will converge (sum) to a compact form, but it will still be a function of s. • Question: how do you get back the aₙ's when you are directly given G(s)? • One widely used way is to differentiate G with respect to s multiple times, evaluate the derivative at s = 0, and divide by a constant: aₙ = G⁽ⁿ⁾(0)/n!. • Often, to remove the division step, we adopt the exponential form G(s) = Σₙ aₙ sⁿ/n!, so that aₙ = G⁽ⁿ⁾(0).
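The recovery recipe aₙ = G⁽ⁿ⁾(0)/n! can be carried out exactly when G is a polynomial. A dependency-free sketch with hypothetical coefficients (the list below is invented for illustration):

```python
from fractions import Fraction
from math import factorial

# Hypothetical coefficients a_0..a_4 embedded in G(s) = sum a_n s**n.
coeffs = [Fraction(c) for c in (1, 3, 0, 5, 2)]

def derivative(poly):
    """Coefficients of p'(s), given the coefficients of p(s)."""
    return [n * c for n, c in enumerate(poly)][1:]

def recover(poly, n):
    """a_n = G^(n)(0) / n!: differentiate n times, evaluate at 0, divide by n!."""
    p = poly
    for _ in range(n):
        p = derivative(p)
    value_at_zero = p[0] if p else Fraction(0)
    return value_at_zero / factorial(n)

# Each embedded coefficient comes back out of the series.
assert [recover(coeffs, n) for n in range(5)] == coeffs
```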

  15. Moment Generating Function • Recall: the k-th moment of a random variable X is E(Xᵏ), where k is a non-negative integer (k = 0, 1, 2, …). • If we regard the moment as a function whose value is indexed by k, then the value set is a countable set: {E(X⁰), E(X¹), E(X²), …}. • Then we can embed all the moments in a generating function/power series known as the Moment Generating Function: M_X(t) = Σₖ E(Xᵏ) tᵏ/k! = E(e^{tX}).
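A sketch of the moment-extraction idea on an assumed example: for X ~ Bernoulli(p), M_X(t) = (1 − p) + p·eᵗ, so M⁽ᵏ⁾(t) = p·eᵗ for every k ≥ 1, and M⁽ᵏ⁾(0) = p should match E(Xᵏ) computed from the pmf (this choice of distribution is mine, for illustration):

```python
import math

# Assumed example: X ~ Bernoulli(p). Then E[X**k] = p for all k >= 1,
# and M_X(t) = (1 - p) + p * e**t, whose k-th derivative is p * e**t.
p = 0.3
pmf = {0: 1 - p, 1: p}

def moment(k):
    """E[X**k] computed directly from the pmf."""
    return sum((x ** k) * prob for x, prob in pmf.items())

def mgf_derivative_at_zero(k):
    """M^(k)(0), using the closed-form derivative of the Bernoulli MGF."""
    return (1 - p) + p if k == 0 else p * math.exp(0.0)

# M^(k)(0) = E[X**k] for every k -- the MGF really does generate the moments.
for k in range(6):
    assert abs(moment(k) - mgf_derivative_at_zero(k)) < 1e-12
```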

  16. Strictly Monotonic Transformation of an R.V. • Strictly Monotonic Transformation (Function) • Strictly Increasing Transformation • Strictly Decreasing Transformation • Consider a strictly increasing function g. For simplicity, use y to denote g(x), and hence Y to denote g(X). The following equality between the two probability differentials must hold: f_Y(y) dy = f_X(x) dx. • Reason: this is equivalent to claiming P(y < Y ≤ y + dy) = P(x < X ≤ x + dx). • But since g is strictly increasing, the event {y < Y ≤ y + dy} is exactly the same one as {x < X ≤ x + dx}. • For strictly decreasing functions, absolute values are needed: f_Y(y) |dy| = f_X(x) |dx|.

  17. Consequence of f_Y(y) dy = f_X(x) dx • Caution: always remember this equality holds under the strictly monotonic transformation condition. • Consequence: f_Y(y) = f_X(g⁻¹(y)) · |d g⁻¹(y)/dy|. • Caution: the absolute value here is always needed, for some rather mysterious reasons in the general theory of calculus (consult Loomis's Advanced Calculus if you are interested). • This is the standard way of finding the (strictly monotonically) transformed density function.
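The change-of-density formula can be checked against simulation. A sketch with assumed choices (X uniform on (0, 1) and g(x) = x², strictly increasing there, so f_Y(y) = 1/(2√y) and hence F_Y(y) = √y — the example is mine, not from the slides):

```python
import random

random.seed(3)
# X ~ Uniform(0, 1); g(x) = x**2 is strictly increasing on (0, 1).
# The formula gives f_Y(y) = f_X(sqrt(y)) * |d sqrt(y)/dy| = 1 / (2 sqrt(y)),
# which integrates to the CDF F_Y(y) = sqrt(y). Compare with an empirical CDF.
ys = [random.random() ** 2 for _ in range(100_000)]

def empirical_cdf(y):
    """Fraction of simulated Y-samples that are <= y."""
    return sum(v <= y for v in ys) / len(ys)

for y in (0.04, 0.25, 0.49, 0.81):
    assert abs(empirical_cdf(y) - y ** 0.5) < 0.01
```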
