XI. Estimation of linear model parameters

XI.A Linear models for designed experiments

XI.B Least squares estimation of the expectation parameters in simple linear models

XI.C Generalized least squares (GLS) estimation of the expectation parameters in general linear models

XI.D Maximum likelihood estimation of the expectation parameters

XI.E Estimating the variance

Statistical Modelling Chapter XI


XI.A Linear models for designed experiments

  • In the analysis of experiments, the models that we have considered are all of the form

    ψ = E[Y] = Xθ and var[Y] = V = Σi σi² Si

  • where

    • ψ is an n × 1 vector of expected values for the observations,

    • Y is an n × 1 vector of random variables for the observations,

    • X is an n × q matrix with n ≥ q,

    • θ is a q × 1 vector of unknown parameters,

    • V is an n × n matrix,

    • σi² is the variance component for the ith term in the variance model and

    • Si is the summation matrix for the ith term in the variance model.

  • Note that generally the X matrices contain indicator variables, each indicating the observations that received a particular level of a (generalized) factor.

Statistical Modelling Chapter XI


Summation matrices

  • Si is the matrix of 1s and 0s that forms the sums for the generalized factor corresponding to the ith term.

  • Now, Si = (n/fi) Mi

  • where

    • Mi is the mean operator for the ith term in the variance model,

    • n is the number of observational units and

    • fi is the number of levels of the generalized factor for this term.

  • For equally-replicated generalized factors, n/fi = gi, where gi is the number of repeats of each of its levels.

  • Consequently, the Ss could be replaced with Ms.
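A minimal numpy sketch of the Si = (n/fi)Mi relationship for a single factor (the block and treatment sizes b = 3 and t = 4 are illustrative choices, not values from the notes): the summation matrix can be formed as XiXi' from the factor's indicator matrix, and dividing by gi gives the mean operator.

```python
import numpy as np

b, t = 3, 4                 # illustrative: 3 blocks of 4 units each
n = b * t

# indicator matrix for the Blocks generalized factor (units in standard order)
X_B = np.kron(np.eye(b), np.ones((t, 1)))   # n x b matrix of 0s and 1s

S_B = X_B @ X_B.T           # summation matrix: sums units within a block
g_B = n // b                # replication of each Blocks level (= t here)
M_B = S_B / g_B             # mean operator for Blocks

y = np.random.default_rng(0).normal(size=n)
print(np.allclose(S_B @ y, np.repeat(X_B.T @ y, t)))   # block sums, repeated -> True
print(np.allclose(M_B @ M_B, M_B))                      # M_B is idempotent -> True
```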

Statistical Modelling Chapter XI


For example, variation model with Blocks random for an RCBD with b = 3 and t = 4

Thus the model is of the form:

Statistical Modelling Chapter XI


Classes of linear models

  • Firstly, linear models split into

    • expectation model is of full rank

    • expectation model is of less than full rank.

  • The rank of an expectation model is given by the following definitions — the second is equivalent to that given in chapter I.

  • Definition XI.1: The rank of an expectation model is equal to the rank of its X matrix.

  • Definition XI.2: The rank of an n × q matrix X, with n ≥ q, is the number of linearly independent columns in X.

  • Such a matrix is of full rank when its rank equals q and is of less than full rank when its rank is less than q.

  • It will be of less than full rank if some linear combination of one set of its columns equals a linear combination of another set of its columns.

Statistical Modelling Chapter XI


Simple vs general models

  • Secondly, linear models split into those for which the variation model is

    • simple, in that it is of the form σ²In, or

    • of a more general form that involves more than one term.

  • For general linear models, the S matrices in the variation model can be written as direct products of I and J matrices.

Definition XI.3: The direct-product operator is denoted ⊗ and, if Ar and Bc are square matrices of order r and c, respectively, then Ar ⊗ Bc is the rc × rc matrix formed by multiplying each element of Ar by the matrix Bc.

Statistical Modelling Chapter XI


Expressions for S matrices in general models

  • When all the variation terms correspond to only unrandomized, and not randomized, generalized factors, use the following rule to derive these expressions.

  • Rule XI.1: The Si for the ith generalized factor from the unrandomized structure that involves s factors, is the direct product of s matrices, provided the s factors are arranged in standard order.

  • Taking the s factors in the sequence specified by the standard order,

    • for each factor in the ith generalized factor include an I matrix in the direct product and

    • J matrices for those that are not.

    • The order of a matrix in the direct product is equal to the number of levels of the corresponding factor.
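A minimal numpy illustration of rule XI.1 for an RCBD unrandomized structure Blends/Flasks (b = 5 blends, each with t = 4 flasks, units in standard order); the variance-component values used here are arbitrary placeholders.

```python
import numpy as np

b, t = 5, 4
I = np.eye
J = lambda k: np.ones((k, k))

# Rule XI.1: an I matrix for a factor in the generalized factor, a J for one that is not
S_Blends       = np.kron(I(b), J(t))   # Blends term: I5 (x) J4
S_BlendsFlasks = np.kron(I(b), I(t))   # Blends^Flasks term: I5 (x) I4 = I20

sigma2_B, sigma2 = 2.0, 1.0            # placeholder variance components
V = sigma2_B * S_Blends + sigma2 * S_BlendsFlasks
print(V.shape)                          # (20, 20); block-diagonal in 4 x 4 blocks
```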

Statistical Modelling Chapter XI


Example IV.1 Penicillin yield (continued)

  • This experiment was an RCBD and its experimental structure is

  • There are two possible models depending on whether Blends is random or fixed.

  • For Blends fixed,

    • and taking the observations to be ordered in standard order for Blends then Flasks,

    • the only random term is Blends∧Flasks and the maximal expectation and variation models are

Statistical Modelling Chapter XI


Example IV.1 Penicillin yield (continued)

  • Suppose observations are

    • in standard order for Blend then Treatment;

    • with Flask renumbered within Blend to correspond to Treatment as in the prerandomization layout.

  • Analyses for the prerandomized and randomized layouts must be the same, as the prerandomized layout is one of the possible randomized arrangements.

  • We can write the Xs as direct products:

  • XB = I5 ⊗ 14

  • XT = 15 ⊗ I4

Statistical Modelling Chapter XI


Example IV.1 Penicillin yield (continued)

  • Clearly, the sum of the indicator variables (columns) in XB and the sum of those in XT both equal 1_20, the 20 × 1 vector of ones.

  • So the expectation model involving the X matrix [XB XT] is of less than full rank — its rank is b + t − 1.

  • The linear model is a simple linear model because the variance matrix is of the form σ²In.

  • That is, the observations are assumed to have the same variance and to be uncorrelated.
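A quick numerical confirmation of the rank statement for this example (b = 5 blends, t = 4 treatments, prerandomized ordering):

```python
import numpy as np

b, t = 5, 4
X_B = np.kron(np.eye(b), np.ones((t, 1)))   # 20 x 5 block indicators
X_T = np.kron(np.ones((b, 1)), np.eye(t))   # 20 x 4 treatment indicators
X   = np.hstack([X_B, X_T])

print(np.linalg.matrix_rank(X), b + t - 1)              # both 8: less than full column rank (9)
print(np.allclose(X_B.sum(axis=1), X_T.sum(axis=1)))    # columns of each sum to the same vector of ones
```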

Statistical Modelling Chapter XI


Example IV.1 Penicillin yield (continued)

  • On the other hand, for Blends random,

    • the random terms are Blends and Blends∧Flasks and

    • the maximal expectation and variation models are

  • In this case the expectation model is of full rank as XT consists of t linearly independent columns.

  • However, it is a general, not a simple, linear model as the variance matrix is not of the form σ²In.

Statistical Modelling Chapter XI


Example IV.1 Penicillin yield (continued)

  • The form is illustrated for b = 3, t = 4 below, the extension to b = 5 being obvious.

  • Model allows for covariance, and hence correlation, between flasks from the same blend.

Statistical Modelling Chapter XI


Example IV.1 Penicillin yield (continued)

  • The direct-product expressions for both Vs conform to rule XI.1.

  • Recall, the units are indexed with Blends then Flasks.

  • The direct product for the Blends term is I5 ⊗ J4; it involves an I and a J because the factor Blends is in this term but Flasks is not.

Statistical Modelling Chapter XI


Example IX.1 Production rate experiment (revisited)

  • Layout:

  • The experimental structure for this experiment was:

Statistical Modelling Chapter XI


Example IX.1 Production rate experiment (revisited)

  • Suppose all the unrandomized factors are random.

  • So the maximal expectation and variation models are, symbolically,

    • ψ = E[Y] = Methods∧Sources

    • var[Y] = Factories + Factories∧Areas + Factories∧Areas∧Parts.

  • That is

Statistical Modelling Chapter XI


Example IX.1 Production rate experiment (revisited)

  • Now if the observations are arranged in standard order with respect to the factors Factories, Areas and then Parts, we can use rule XI.1 to obtain the following direct-product expression for V.

  • In this matrix

    • one combination of the variance components will occur down the diagonal,

    • another will occur in 3 × 3 blocks down the diagonal and

    • a third will occur in 9 × 9 blocks down the diagonal.

  • So this model allows for

    • covariance between parts from the same area and

    • a different covariance between areas from the same factory.

Statistical Modelling Chapter XI


Example IX.1 Production rate experiment (revisited)

  • Actually, in ch. IX, Factories was assumed fixed and so

  • So the rules still apply as terms in V only involve unrandomized factors

  • This model only allows for

    • covariance between parts from the same area.

Statistical Modelling Chapter XI


XI.B Least squares estimation of the expectation parameters in simple linear models

  • We want to estimate the parameters θ in the expectation model by establishing their estimators.

  • There are several different methods for doing this.

  • A common method is the method of least squares, as in the following definition originally given in chapter I.

Statistical Modelling Chapter XI


a) Ordinary least squares estimators for full-rank expectation models

  • Definition XI.4: Let Y = Xθ + e where

    • X is an n × q matrix of rank m ≤ q with n ≥ q,

    • θ is a q × 1 vector of unknown parameters,

    • e is an n × 1 vector of errors with mean 0 and variance σ²In.

      The ordinary least squares (OLS) estimator of θ is the value of θ that minimizes e'e = (Y − Xθ)'(Y − Xθ).

  • Note that this definition also applies to the less-than-full-rank case.

Statistical Modelling Chapter XI


Least squares estimators of θ

  • Theorem XI.1: Let Y = Xθ + e where

    • Y is an n × 1 vector of random variables for the observations,

    • X is an n × q matrix of full rank with n ≥ q,

    • θ is a q × 1 vector of unknown parameters,

    • e is an n × 1 vector of errors with mean 0 and variance σ²In.

      The ordinary least squares estimator of θ is given by θ̂ = (X'X)⁻¹X'Y.

  • (The '^' denotes estimator.)

  • Proof: Need to set the differential of e'e = (Y − Xθ)'(Y − Xθ) to zero (see notes for details);

  • Yields the normal equations whose solution is given above.
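For reference, a minimal numpy sketch of theorem XI.1's estimator, solving the normal equations X'Xθ = X'Y directly (the design and data here are simulated placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)
n, q = 12, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, q - 1))])  # full-rank X
theta_true = np.array([2.0, -1.0, 0.5])
Y = X @ theta_true + rng.normal(scale=0.3, size=n)

# OLS: solve the normal equations X'X theta = X'Y
theta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
print(theta_hat)           # close to theta_true
```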

Statistical Modelling Chapter XI


Estimator of ψ

  • Using definition I.7 (expected values), an estimator of the expected values is ψ̂ = Xθ̂.

  • Now for experiments

    • the X matrix consists of indicator variables and

    • the maximal expectation model is generally the sum of terms each of which is an X matrix for a (generalized) factor and an associated parameter vector.

  • In cases where maximal expectation model involves a single term corresponding to just one generalized factor,

    • the model is necessarily of full rank and

    • the OLS estimator of its parameters is particularly simple.

Statistical Modelling Chapter XI


Example VII.4 2×2 factorial experiment

  • To aid in the proofs coming up, consider a two-factor factorial experiment (as in chapter 7, Factorial experiments).

  • Suppose there are two factors A and B with a = b = 2, r = 3 and n = 12.

  • Assume that the data are ordered for A, then B, then replicates.

  • The maximal model involves a single term corresponding to the generalized factor A∧B — it is

That is, each column of XAB indicates which observations received one of the combinations of the levels of A and B.

Statistical Modelling Chapter XI


Example VII.4 2×2 factorial experiment (continued)

  • Now, from theorem XI.1 (OLS estimator), the estimator of the expectation parameters is given by

Statistical Modelling Chapter XI


Example VII.4 2×2 factorial experiment (continued) — obtaining the inverse

  • Notice in the above that

    • the rows of XAB' are the columns of XAB

    • each element in the product is of the form xi'xj

  • where xi and xj are the n-vectors for the ith and jth columns of X, respectively.

  • i.e. sum of pairwise products of elements of ith and jth columns of X.

Statistical Modelling Chapter XI


Example VII.4 2×2 factorial experiment (continued) — obtaining the product

  • where Yij. is the sum over the replicates for the ijth combination of A and B.

Statistical Modelling Chapter XI


Example VII.4 2×2 factorial experiment (continued) — obtaining parameter estimates

  • where Ȳij. is the mean over the replicates for the ijth combination of A and B.

Statistical Modelling Chapter XI


Example VII.4 2×2 factorial experiment (continued) — obtaining the estimator of expected values

  • where MAB is the mean operator producing the n-vector of means for the combinations of A and B.
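A numerical check of this example (a = b = 2, r = 3, n = 12, observations in standard order for A then B then replicates; the responses are simulated placeholders): the OLS estimates from theorem XI.1 are just the four A∧B cell means.

```python
import numpy as np

a, b_, r = 2, 2, 3
X_AB = np.kron(np.eye(a * b_), np.ones((r, 1)))      # 12 x 4 indicator matrix for A^B

rng = np.random.default_rng(2)
Y = rng.normal(loc=[10, 12, 14, 20], scale=0.5, size=(r, a * b_)).T.ravel()

theta_hat = np.linalg.solve(X_AB.T @ X_AB, X_AB.T @ Y)
cell_means = Y.reshape(a * b_, r).mean(axis=1)
print(np.allclose(theta_hat, cell_means))             # True: OLS estimates are cell means
```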

Statistical Modelling Chapter XI


Proofs for expected values for a single-term model

  • First we prove that the model is of full rank and then derive the OLS estimator.

  • Lemma XI.1: Let X be an n × f matrix with n ≥ f.

    Then rank(X) = rank(X'X).

    Proof: not given.

Statistical Modelling Chapter XI


Proofs for expected values for a single-term model (continued)

  • Lemma XI.2: Let Y = Xθ + e where X is an n × f matrix corresponding to a generalized factor, F say, with f levels and n ≥ f.

    Then the rank of X is f, so that X is of full rank.

  • Proof: Lemma XI.1 states that rank(X) = rank(X'X) and it is easier to derive the rank of X'X.

  • To do this we need to establish a general expression for X'X, which we do by considering the form of X.

Statistical Modelling Chapter XI


Proofs for expected values for a single-term model (continued)

  • Now

    • n rows of X correspond to observational units in the experiment and

    • f columns to the levels of the generalized factor.

  • All the elements of X are either 0 or 1

    • In the ith column, a 1 occurs for those units with the ith level of the generalized factor

      so that, if the ith level of the generalized factor is replicated gi times, there will be gi 1s in the ith column.

    • In each row of X there will be a single 1 as only one level of the generalized factor can be observed with each unit

      if it is the ith level of the generalized factor that occurs on the unit, then the 1 will occur in the ith column of X.

Statistical Modelling Chapter XI


Proofs for expected values for a single-term model (continued)

  • Considering the f × f matrix X'X

    • its ijth element is the product of the ith column, as a row vector, with the jth column, as given in the following expression (see example above):

  • When i = j, the two columns are the same and the products are either of two 1s or two 0s, so

    • product is 1 for units that have ith level of generalized factor and

    • sum will be gi.

  • When i≠j, have product of vector for ith column, as a row vector, with that for jth column.

    • As each unit can receive only one level for the generalized factor, it cannot receive both ith and jth levels and the products are either two 0s or a 1 and a 0.

    • Consequently, sum is zero.

Statistical Modelling Chapter XI


Proofs for expected values for a single-term model (continued)

  • It is concluded that X'X is an f × f diagonal matrix with diagonal elements gi.

  • Hence, in this case, rank(X) = rank(X'X) = f and so X is of full rank.

Statistical Modelling Chapter XI


Proofs for expected values for a single-term model (continued)

  • Theorem XI.2: Let Y = Xθ + e

    where

    • X is an n × f matrix corresponding to a generalized factor, F say, with f levels and n ≥ f,

    • θ is an f × 1 vector of unknown parameters,

    • e is an n × 1 vector of errors with mean 0 and variance σ²In.

      The ordinary least squares estimator of θ is the f-vector of means for the levels of the generalized factor F.

      (Double over-dots on the mean vector indicate that it is an f-vector, not an n-vector.)

Statistical Modelling Chapter XI


Proofs for expected values for a single-term model (continued)

  • Proof: From theorem XI.1 (OLS estimator) we have that the OLS estimator is θ̂ = (X'X)⁻¹X'Y.

  • We need expressions for X'X and X'Y in this special case.

  • In the proof of lemma XI.2 it was shown that X'X is an f × f diagonal matrix with diagonal elements gi.

  • Consequently, (X'X)⁻¹ is a diagonal matrix with diagonal elements 1/gi.

Statistical Modelling Chapter XI


Proofs for expected values for a single-term model (continued)

  • Now X'Y is an f-vector whose ith element is the total for the ith level of the generalized factor of the elements of Y.

    • This can be seen when it is realized that the ith element of X'Y is the product of the ith column of X with Y.

    • Clearly, the result will be the sum of the elements of Y corresponding to elements of the ith column of X that are one:

    • those units with the ith level of the generalized factor.

  • Finally, taking the product of (X'X)⁻¹ with X'Y divides each total by its replication, forming the f-vector of means as stated.

Statistical Modelling Chapter XI


Proofs for expected values for a single-term model (continued)

  • Corollary XI.1: With the model for Y as in theorem XI.2, the estimator of the expected values is given by ψ̂ = MFY

  • where

    • MFY is the n-vector an element of which is the mean for the level of the generalized factor F for the corresponding unit and

    • MF is the mean operator that replaces each observation in Y with the mean for the corresponding level of the generalized factor F.

Statistical Modelling Chapter XI


Proofs for expected values for a single-term model (continued)

  • Proof (continued): Now ψ̂ = Xθ̂ and, as stated in the proof of lemma XI.2 (rank of X), each row of X contains a single 1

    • if it is the ith level of the generalized factor that occurs on the unit, then the 1 will occur in the ith column of X.

  • So the corresponding element of Xθ̂ will be the ith element of θ̂, which is the mean for the ith level of the generalized factor F.

  • To show that ψ̂ = MFY, we have from theorem XI.1 (OLS estimator) that ψ̂ = Xθ̂ = X(X'X)⁻¹X'Y.

  • Letting MF = X(X'X)⁻¹X' yields ψ̂ = MFY.

  • Now, given also that Xθ̂ is the vector of level means, MF must be the mean operator that replaces each observation in Y with the mean for the corresponding level of F.

Statistical Modelling Chapter XI


Example XI.1 Rat experiment (continued)

  • See lecture notes for this example that involves a single, original factor whose levels are unequally replicated.

Statistical Modelling Chapter XI


b) Ordinary least squares estimators for less-than-full-rank expectation models

  • The model for the expectation is still of the form E[Y] = Xθ but in this case the rank of X is less than the number of columns.

  • Now rank(X) = rank(X'X) = m < q and so there is no inverse of X'X to use in solving the normal equations.

  • Our aim is to show that, in spite of this and the difficulties that ensue, the estimators are functions of means.

  • For example, the maximal, less-than-full-rank, expectation model for an RCBD is:

  • ψB+T = E[Y] = XBβ + XTτ

  • Will show, with some effort, the estimator of the expected values is:

Statistical Modelling Chapter XI


Less-than-full-rank vs full-rank model

  • In the full-rank model X'X is nonsingular, so (X'X)⁻¹ exists and the normal equations have the one solution θ̂ = (X'X)⁻¹X'Y.

  • In a less-than-full-rank model there are infinitely many solutions to the normal equations.

  • In the full-rank model it is assumed that the parameters specified in the model are unique.

    There exists exactly one set of real numbers {θ1, θ2, …, θq} that describes the system.

    In the less-than-full-rank model there are infinitely many sets of real numbers that describe the system.

    For our example, there are infinitely many choices for {β1, …, βb, τ1, …, τt}. The model parameters are said to be nonidentifiable.

  • In the full-rank model all linear functions of {θ1, θ2, …, θq} can be estimated unbiasedly.

  • In the less-than-full-rank model some functions can while others cannot. We want to identify functions with invariant estimated values.

Statistical Modelling Chapter XI


Example XI.2 The RCBD

  • The simplest model that is less than full rank is the maximal expectation model for the RCBD,

    E[Y] = Blocks + Treats.

  • Generally, an RCBD involves b blocks with t treatments so that there are n = bt observations.

  • The maximal model used for an RCBD, in matrix terms, is:

    ψB+T = E[Y] = XBβ + XTτ and var[Y] = σ²In

  • However, as previously mentioned, the model is not of full rank because the sums of the columns of both XB and XT are equal to 1n.

  • Consequently, for X = [XB XT], rank(X) = rank(X'X) = b + t − 1

Statistical Modelling Chapter XI


Illustration for b = t = 2 that the parameters are nonidentifiable

  • Suppose the following information is known about the parameters:

    β1 + τ1 = 10, β1 + τ2 = 15, β2 + τ1 = 12, β2 + τ2 = 17.

  • Then the parameters are not identifiable, for:

    • if β1 = 5, then τ1 = 5, τ2 = 10 and β2 = 7;

    • if β1 = 6, then τ1 = 4, τ2 = 9 and β2 = 8.

  • Clearly we can pick a value for any one of the parameters and then find values of the others that satisfy the above equations.

  • There are infinitely many possible values for the parameters.

  • However, no matter what values are taken for β1, β2, τ1 and τ2, the value of β2 − β1 = 2 and of τ2 − τ1 = 5;

    • these functions are invariant.

Statistical Modelling Chapter XI


Coping with less-than-full-rank models

  • Extend estimation theory presented in section a), Ordinary least squares estimators for full-rank expectation models.

  • Involves obtaining solutions to the normal equations using generalized inverses.

Statistical Modelling Chapter XI


Introduction to generalized inverses

  • Suppose that we have a system of n linear equations in q unknowns such as Ax = y

  • where

    • A is an n × q matrix of real numbers,

    • x is a q-vector of unknowns and

    • y is an n-vector of real numbers.

  • There are 3 possibilities:

    • the system is inconsistent and has no solution;

    • the system is consistent and has exactly one solution;

    • the system is consistent and has many solutions.

Statistical Modelling Chapter XI


Consistent vs inconsistent equations

  • Consider the following equations:

    x1 + 2x2 = 7

    3x1 + 6x2 = 21

  • They are consistent in that the solution of one will also satisfy the second, because the second equation is just 3 times the first.

  • However, the following equations are inconsistent in that it is impossible to satisfy both at the same time:

    x1 + 2x2 = 7

    3x1 + 6x2 = 24

  • Generally, to be consistent, any linear relations among the left-hand sides of the equations must also be satisfied by the right-hand sides.

  • Theorem XI.3: Let E[Y] = Xθ be a linear model for the expectation. Then the system of normal equations X'Xθ = X'Y is consistent.

  • Proof: not given.

Statistical Modelling Chapter XI


Generalized inverses

  • Definition XI.5: Let A be an n × q matrix. A q × n matrix A⁻ such that

    AA⁻A = A

    is called a generalised inverse (g-inverse for short) for A.

  • Any matrix A has a generalized inverse but it is not unique unless A is nonsingular, in which case A⁻ = A⁻¹.

Statistical Modelling Chapter XI


Example XI.3 Generalized inverse of a 2 × 2 matrix

It is easy to see that the following matrices are also generalized inverses for A:

This illustrates that generalized inverses are not necessarily unique.

Statistical Modelling Chapter XI


An algorithm for finding generalized inverses

To find a generalized inverse A⁻ for an n × q matrix A of rank m:

  • Find any nonsingular m × m minor H of A, i.e. a matrix obtained by selecting m rows and m columns of A.

  • Find H-1.

  • Replace H in A with (H-1)'.

  • Replace all other entries in A with zeros.

  • Transpose the resulting matrix.
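A minimal Python sketch of this algorithm; the caller supplies the row and column indices of a nonsingular m × m minor (choosing such a minor automatically is not attempted here).

```python
import numpy as np

def g_inverse(A, rows, cols):
    """Generalized inverse of A via the minor algorithm.

    rows, cols index an m x m nonsingular minor H of A, where m = rank(A).
    """
    A = np.asarray(A, dtype=float)
    H_inv = np.linalg.inv(A[np.ix_(rows, cols)])   # step 2: invert the minor
    G = np.zeros_like(A)
    G[np.ix_(rows, cols)] = H_inv.T                # step 3: replace H by (H^-1)'
    return G.T                                     # steps 4-5: zeros elsewhere, then transpose

A = np.array([[2.0, 2.0],
              [3.0, 3.0]])                         # rank 1
G = g_inverse(A, rows=[0], cols=[0])               # minor H = [2]
print(np.allclose(A @ G @ A, A))                   # AA-A = A -> True
```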

Statistical Modelling Chapter XI


Properties of generalized inverses

  • Let A be an n × q matrix of rank m with n ≥ q ≥ m.

  • Then

    • A-A and AA- are idempotent.

    • rank(A-A) =rank(AA-) =m.

    • If A- is a generalized inverse of A, then (A-)' is a generalized inverse of A'; that is, (A-)' = (A')-.

    • (A'A)-A' is a generalized inverse of A so that A=A(A'A)-(A'A) and A'= (A'A)(A'A)-A'.

    • A(A'A)-A' is unique, symmetric and idempotent as it is invariant to the choice of a generalized inverse. Furthermore rank(A(A'A)-A') =m.

Statistical Modelling Chapter XI


Generalized inverses and the normal equations

  • Lemma XI.3: Let Ax=y be consistent.

    Then x=A-y is a solution to the system where A- is any generalized inverse for A.

  • Proof: not given.

Statistical Modelling Chapter XI


Generalized inverses & the normal equations (cont'd)

  • Theorem XI.4: Let Y = Xθ + e where X is an n × q matrix of rank m ≤ q with n ≥ q, θ is a q × 1 vector of unknown parameters, and e is an n × 1 vector of errors with mean 0 and variance σ²In.

  • The ordinary least squares estimator of θ is denoted by θ̂ and is given by θ̂ = (X'X)⁻X'Y.

  • Proof: In our proof for the full-rank case, we showed that the LS estimator is a solution of the normal equations.

  • Nothing in our derivation of this result depended on the rank of X.

    • So the least squares estimator in the less-than-full-rank case is still a solution to the normal equations.

  • Applying theorem XI.3 (normal equations consistent) and lemma XI.3 (g-inverse gives a solution) to the normal equations yields θ̂ = (X'X)⁻X'Y as a solution of the normal equations, as claimed. (It is not unique — see notes.)
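To see the nonuniqueness concretely, here is a small numpy sketch for the RCBD model (b = 3, t = 4, simulated data): two different solutions of the normal equations give different θ̂ but identical fitted values Xθ̂.

```python
import numpy as np

b, t = 3, 4
X_B = np.kron(np.eye(b), np.ones((t, 1)))
X_T = np.kron(np.ones((b, 1)), np.eye(t))
X = np.hstack([X_B, X_T])                          # rank b + t - 1, not full column rank

rng = np.random.default_rng(3)
Y = rng.normal(size=b * t)

# one solution via the Moore-Penrose g-inverse of X'X
theta1 = np.linalg.pinv(X.T @ X) @ X.T @ Y
# another solution: add a null-space vector of X (blocks shifted up, treatments down)
theta2 = theta1 + np.r_[np.ones(b), -np.ones(t)]

print(np.allclose(X.T @ X @ theta2, X.T @ Y))      # still solves the normal equations
print(np.allclose(X @ theta1, X @ theta2))         # fitted values identical
```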

Statistical Modelling Chapter XI


A useful lemma

  • The next theorem derives the OLS estimators for the less-than-full-rank model for the RCBD.

  • But first, a useful lemma.

  • Lemma XI.4: The inverse of aIn + bJn is (1/a)[In − (b/(a + nb))Jn], where a and b are scalars.

  • Proof: First show that the product of aIn + bJn and (1/a)[In − (b/(a + nb))Jn] is In;

  • consequently, (1/a)[In − (b/(a + nb))Jn] is the inverse of aIn + bJn.

  • Think of J matrices as "sum-everything" matrices.
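The original display for lemma XI.4 is not reproduced in this transcript; assuming it is the standard identity (aIn + bJn)⁻¹ = (1/a)[In − (b/(a + nb))Jn] as written above, here is a quick numerical check.

```python
import numpy as np

n, a, b_ = 4, 3.0, 0.7                              # arbitrary test values
I, J = np.eye(n), np.ones((n, n))

M = a * I + b_ * J
M_inv = (1 / a) * (I - (b_ / (a + n * b_)) * J)     # assumed form of the lemma
print(np.allclose(M @ M_inv, I))                    # True
```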

Statistical Modelling Chapter XI


OLS estimators for the less-than-full-rank model for the RCBD

  • Theorem XI.6: Let Y be an n-vector of jointly-distributed random variables with

    ψ = E[Y] = XBβ + XTτ and var[Y] = σ²In

    where

    • β, XB, τ and XT are the usual vectors and matrices.

  • Then a nonunique estimator of θ, obtained by deleting the last row and column of X'X in computing the g-inverse, is

  • where

    • is the b-vector of block means of Y,

    • is the (t − 1)-vector of the first t − 1 treatment means of Y,

    • is the mean of Y for treatment t and

    • is the grand mean of Y.

Statistical Modelling Chapter XI


Proof of theorem XI.6

  • According to theorem XI.4 (OLS for the less-than-full-rank case), θ̂ = (X'X)⁻X'Y,

  • and so we require a generalized inverse of X'X where X = [XB XT], so that

  • (An off-diagonal block of X'X gives the number of times each treatment occurs in each block.)

Statistical Modelling Chapter XI


An algorithm for finding generalized inverses (from before)

To find a generalized inverse A⁻ for an n × q matrix A of rank m:

  • Find any nonsingular m × m minor H of A, i.e. a matrix obtained by selecting m rows and m columns of A.

  • Find H-1.

  • Replace H in A with (H-1)'.

  • Replace all other entries in A with zeros.

  • Transpose the resulting matrix.

Statistical Modelling Chapter XI


Proof of theorem XI.6 (continued)

  • Our algorithm tells us that we must take a minor of order b + t − 1 and obtain its inverse.

  • So delete any row and column and find the inverse of what remains

    • say the last row and column.

  • To remove the last row and column of X'X,

    • let Z = [XB XT−1] where XT−1 is the n × (t − 1) matrix formed by taking the first t − 1 columns of XT.

  • Then Z'Z is X'X with its last row and column removed.

  • Now Z'Z is clearly symmetric, so its inverse will be also. Our generalized inverse will be (Z'Z)⁻¹ with a row and column of zeros added. (There is no need to transpose.)

Statistical Modelling Chapter XI


Proof of theorem XI.6 (continued)

  • Now to find the inverse of Z'Z we note that

    • for a partitioned, symmetric matrix H where

  • In our case

  • So first we need to find W, then V and lastly U.

Statistical Modelling Chapter XI


Proof of theorem XI.6 (continued)

  • Note that lemma XI.4 (special inverse) is used to obtain the inverse in this derivation.

Statistical Modelling Chapter XI


Proof of theorem XI.6 (continued)

  • and

Statistical Modelling Chapter XI


Proof of theorem XI.6 (continued)

  • Now we want to partition X'Y similarly:

  • X'Y is a vector of totals

  • Total = mean × n.

  • Express as means for notational economy

  • where

    • XT−1 is the n × (t − 1) matrix formed by taking the first t − 1 columns of XT,

    • xt is column t of XT,

    • is the b-vector of block means of Y,

    • is the (t − 1)-vector of the first t − 1 treatment means of Y,

    • is the mean of Y for treatment t.

Statistical Modelling Chapter XI


Proof of theorem XI.6 (continued)

  • Hence the nonunique estimators of q are

  • Now to simplify this expression further we look to simplify

Statistical Modelling Chapter XI


Proof of theorem XI.6 (continued)

  • where is the grand mean of all the observations.

  • Hence,

  • Similarly,

  • Next, each element of is the sum of the first (t − 1) Treatment means and

  • Hence,

  • Similarly,

Statistical Modelling Chapter XI


Proof of theorem XI.6 (continued)

  • Therefore

  • Summary:

  • So the estimator is a function of block, treatment and grand means, albeit a rather messy one.

Statistical Modelling Chapter XI


c) Estimable functions

  • In the less-than-full-rank model the problem we face is that θ cannot be estimated uniquely, as the estimate changes for the many different choices of (X'X)⁻.

  • That is, we consider functions whose estimators are invariant to the different solutions, in that their values remain the same regardless of which solution to the normal equations is used.

  • This property is true for all estimable functions.

Statistical Modelling Chapter XI


Definition of estimable functions

  • Definition XI.6: Let E[Y] = Xθ and var[Y] = σ²In where X is n × q of rank m ≤ q.

  • A function ℓ'θ is said to be estimable if there exists a vector c such that E[c'Y] = ℓ'θ for all possible values of θ.

  • This definition is saying that, to be estimable, there must be some linear combination of the random variables Y1, Y2, …, Yn that is a linear unbiased estimator of ℓ'θ.

  • In the case of several linear functions, we can extend the above definition as follows:

  • If L is a k × q matrix of constants, then Lθ is a set of k estimable functions if and only if there is a matrix C of constants so that E[CY] = Lθ for all possible values of θ.

  • However, how to identify estimable functions?

  • The following theorems give us some important cases that will cover the quantities of most interest.

Statistical Modelling Chapter XI


Expected value results

  • Lemma XI.5: Let

    • a be a k × 1 vector of constants,

    • A an s × k matrix of constants and

    • Y a k × 1 vector of random variables (a random vector).

    Then E[a'Y] = a'E[Y] and E[AY] = AE[Y].

Statistical Modelling Chapter XI


Estimable if a linear combination of X

  • Theorem XI.7: Let E[Y] = Xθ and var[Y] = σ²In where X is n × q of rank m ≤ q.

  • Then ℓ'θ is an estimable function if and only if there is a vector of constants c such that ℓ' = c'X.

  • (Note that ℓ' = c'X implies that ℓ' is a linear combination of the rows of X.)

  • Moreover, for a k × q matrix L, Lθ is a set of k estimable functions if and only if there is a matrix C of constants such that L = CX.

Statistical Modelling Chapter XI


Proof

  • First suppose that ℓ'θ is estimable.

  • Then, by definition XI.6 (estimable function), there is a vector c so that E[c'Y] = ℓ'θ.

  • Now, using lemma XI.5 (expectation rules), E[c'Y] = c'E[Y] = c'Xθ.

  • But also E[c'Y] = ℓ'θ and so c'Xθ = ℓ'θ.

  • Since this last equation is true for all θ, it follows that c'X = ℓ'.

  • Second, suppose that ℓ' = c'X.

  • Then ℓ'θ = c'Xθ = E[c'Y] and, by definition XI.6 (estimable function), ℓ'θ is estimable.

  • Similarly, if E[CY] = Lθ, then CXθ = Lθ for all θ and thus CX = L.

  • Likewise, if L = CX, then Lθ = CXθ = E[CY] and hence Lθ is estimable.

Statistical Modelling Chapter XI


Expected values are estimable

  • Theorem XI.8: Let ψ = E[Y] = Xθ and var[Y] = σ²In where X is n × q of rank m ≤ q.

    Each of the elements of ψ = Xθ is estimable.

  • Proof: From theorem XI.7, Lθ is a set of estimable functions if L = CX.

  • We want to estimate Xθ, so that L = X

    and so to have L = CX we need to find a C such that CX = X.

    Clearly, setting C = In we have that CX = X = L and as a consequence ψ = Xθ is estimable.

Statistical Modelling Chapter XI


A linear combination of estimable functions is itself estimable

  • Theorem XI.9: Let ℓ1'θ, …, ℓk'θ be a collection of k estimable functions.

  • Let z be a linear combination of these functions.

  • Then z is estimable.

  • Proof: If each of ℓ1'θ, …, ℓk'θ is estimable then, by definition XI.6 (estimable function), there exists a vector of constants ci such that E[ci'Y] = ℓi'θ.

  • Then z can be written as E[c'Y], where c is the corresponding linear combination of the ci,

  • and so z is estimable.

Statistical Modelling Chapter XI


Finding all the estimable functions

  • First find ψ = Xθ, all of whose elements are estimable functions.

  • It can be shown that all estimable functions can be expressed as a linear combination of the entries in ψ = Xθ.

  • So find all linear combinations of ψ = Xθ and you have all the estimable functions.

Statistical Modelling Chapter XI


Example XI.2 The RCBD (continued)

  • For the RCBD, the maximal model is

    ψ = E[Y] = XBβ + XTτ.

  • So for b = 5, t = 4,

  • From this we identify the 20 basic estimable functions, βi + τk.

  • Any linear combination of these 20 is estimable and any estimable function can be expressed as a linear combination of these.

  • Hence βi − βi' and τk − τk' are estimable.

  • On the other hand, neither βi nor τk + τk' is estimable.

Statistical Modelling Chapter XI


d) Properties of estimable functions

  • So we have seen what estimable functions are and how to identify them, but why are they important?

  • Turns out that they can be unbiasedly and uniquely estimated using any solution to the normal equations and that they are, in some sense, best.

  • It is these properties that we establish in this section.

  • But first a definition.

Statistical Modelling Chapter XI


The LS estimator of an estimable function

  • Definition XI.7: Let ℓ'θ be an estimable function and let θ̂ be a least squares estimator of θ.

  • Then ℓ'θ̂ is called the least squares estimator of ℓ'θ.

  • Now the implication of this definition is that ℓ'θ̂ is the unique estimator of ℓ'θ.

  • This is not at all obvious because there are infinitely many solutions to the normal equations when the model is of less than full rank.

  • The next theorem establishes that the least squares estimator of an estimable function is unique.

Statistical Modelling Chapter XI


Recall properties of g-inverses

  • Let A be an n × q matrix of rank m with n ≥ q ≥ m.

  • Then

    • A-A and AA- are idempotent.

    • rank(A-A) =rank(AA-) =m.

    • If A- is a generalized inverse of A, then (A-)' is a generalized inverse of A'; that is, (A-)' = (A')-.

    • (A'A)-A' is a generalized inverse of A so that A=A(A'A)-(A'A) and A'= (A'A)(A'A)-A'.

    • A(A'A)-A' is unique, symmetric and idempotent as it is invariant to the choice of a generalized inverse. Furthermore rank(A(A'A)-A') =m.

Statistical Modelling Chapter XI


LS estimator of estimable function is unique

  • Theorem XI.10: Let ℓ'θ be an estimable function.

  • Then the least squares estimator ℓ'θ̂ is invariant to the choice of least squares estimator θ̂.

  • That is, the expression for ℓ'θ̂ is the same for every expression for θ̂.

  • Proof: From theorem XI.7 (estimable if ℓ' = c'X), we have ℓ' = c'X and, from theorem XI.4 (LS/LTFR), θ̂ = (X'X)⁻X'Y.

  • Using these results we get that ℓ'θ̂ = c'X(X'X)⁻X'Y.

  • Since, from property 5 of generalized inverses, X(X'X)⁻X' is invariant to the choice of (X'X)⁻, it follows that ℓ'θ̂ has the same expression for any choice of (X'X)⁻.

  • Hence ℓ'θ̂ is invariant to (X'X)⁻ and hence to θ̂.

Statistical Modelling Chapter XI


Unbiased and variance

  • Next we show that the least squares estimator of an estimable function is unbiased and obtain its variance.

  • Theorem XI.11: Let E[Y] = Xθ and var[Y] = σ²In where X is n × q of rank m ≤ q.

  • Also let ℓ'θ be an estimable function and ℓ'θ̂ be its least squares estimator.

  • Then ℓ'θ̂ is unbiased for ℓ'θ.

  • Moreover, var[ℓ'θ̂] = σ²ℓ'(X'X)⁻ℓ, which is invariant to the choice of generalized inverse.

Statistical Modelling Chapter XI


Proof of theorem XI.11

  • This verifies that ℓ'θ̂ is unbiased for ℓ'θ.

Statistical Modelling Chapter XI


Proof of theorem XI.11 (continued)

  • Next

Statistical Modelling Chapter XI


Unbiased only for estimable functions

  • In general, E[ℓ'θ̂] = ℓ'(X'X)⁻X'Xθ,

  • which is not necessarily equal to ℓ'θ.

  • Thus the case when ℓ'θ is estimable, and hence ℓ'θ̂ is an unbiased estimator of it, is a very special case.

Statistical Modelling Chapter XI


BLUE = Best Linear Unbiased Estimator

  • Definition XI.8: Linear estimators are estimators of the form AY, where A is a matrix of real numbers.

  • That is, each element of the estimator is a linear combination of the elements of Y.

  • The unbiased least squares estimator developed here is an example of a linear estimator with A = ℓ'(X'X)⁻X'.

  • Unfortunately, unbiasedness does not guarantee uniqueness.

  • There might be more than one set of unbiased estimators of θ.

  • Another desirable property of an estimator is that it be minimum variance.

  • For this reason the least squares estimator is called the BLUE (best linear unbiased estimator).

Statistical Modelling Chapter XI


A result that will be required in the proof of the Gauss-Markoff theorem

  • Lemma XI.6: Let U and V be random variables and a and b be constants. Then var[aU + bV] = a²var[U] + b²var[V] + 2ab cov[U, V].

  • Proof:

Note that last step uses definitions I.5 and I.6 of the variance and covariance.

Statistical Modelling Chapter XI


Gauss-Markoff theorem

  • Theorem XI.12 [Gauss-Markoff]: Let E[Y] = Xθ and var[Y] = σ²In where X is n × q of rank m ≤ q. Let ℓ'θ be estimable.

  • Then the best linear unbiased estimator of ℓ'θ is ℓ'θ̂, where θ̂ is any solution to the normal equations.

  • This estimator is invariant to the choice of θ̂.

  • The proof proceeds as follows:

    • Establish a result that will be used subsequently.

    • Prove that ℓ'θ̂ is the minimum variance estimator by proving that var[ℓ'θ̂] ≤ var[a'Y] for any linear unbiased estimator a'Y.

    • Prove that ℓ'θ̂ is the unique minimum variance estimator.

  • Proof: Suppose that a'Y is some other linear unbiased estimator of ℓ'θ so that E[a'Y] = ℓ'θ.

Statistical Modelling Chapter XI


Proof of theorem XI.12

  • Firstly, E[a'Y] = a'E[Y] = a'Xθ, and

    since also E[a'Y] = ℓ'θ,

    a'Xθ = ℓ'θ for all θ.

  • Consequently, a'X = ℓ'

    this is the result that will be used subsequently.

  • Secondly, we proceed to show that var[ℓ'θ̂] ≤ var[a'Y].

  • Now, from lemma XI.6 (var[aU + bV]), as ℓ'θ̂ and a'Y are scalar random variables and ℓ'θ̂ − a'Y is their linear combination with coefficients 1 and −1, we have

Statistical Modelling Chapter XI


Proof of theorem XI.12 (continued)

(The last step uses theorem XI.11 — LS and var of est. func.)

Statistical Modelling Chapter XI


Proof of theorem XI.12 (continued)

  • So,

  • Since,

  • Hence, ℓ'θ̂ has minimum variance amongst all linear unbiased estimators.

Statistical Modelling Chapter XI


Proof of theorem XI.12 (continued)

  • Then, and hence, is a constant, say c.

  • Since

so that

  • But, so that

  • Now we have that c = 0 and so

  • Thus any linear unbiased estimator of ℓ'θ which has minimum variance must equal ℓ'θ̂.

  • So, provided we restrict our attention to estimable functions, our estimators will be unique and BLUE.

Statistical Modelling Chapter XI


e) Properties of the estimators in the full rank case

  • The above results apply generally to models either of less than full rank or of full rank.

  • However, in the full rank case, some additional results apply.

  • So the elements of θ̂ are BLUE for θ.

  • Further, since ℓ'θ is a linear combination of θ, it follows that ℓ'θ is estimable for all constant vectors ℓ.

Statistical Modelling Chapter XI


f) Estimation for the maximal model for the RCBD

  • In the next theorem, because each of the parameters in θ' = [β' τ'] is not estimable, we consider the estimators of ψ.

    • theorem XI.8 tells us that these are estimable,

    • theorem XI.10 that they are invariant to the solution to the normal equation chosen and

    • theorem XI.12 that they are BLUE.

Statistical Modelling Chapter XI


Estimator of expected values for RCBD

  • Theorem XI.13: Let Y be an n-vector of jointly-distributed random variables with

    ψ = E[Y] = XBβ + XTτ and var[Y] = σ²In

    where β, XB, τ and XT are the usual vectors and matrices.

  • Then the estimator of the expected values is the sum of the n-vectors of block and treatment means minus the n-vector of grand means.

Statistical Modelling Chapter XI


Proof of theorem XI.13

  • Proof: Using the estimator of θ established in theorem XI.6 (LS estimator of θ in the RCBD), the estimator of the expected values is given by

Statistical Modelling Chapter XI


Proof of theorem XI.13 (continued)

  • Now each row of [XB XT−1] corresponds to one observation: say the observation in the ith block that received the jth treatment.

  • In the row for the observation in the ith block that received the jth treatment:

    • the element in the ith column of XB will be 1,

    • as will the element in the jth column of XT−1;

    • all other elements in the row will be zero.

  • Note that if the treatment is t, the only nonzero element will be in the ith column of XB.

(Put the block-mean and treatment-mean vectors on the board and consider what is obtained for, say, observations 7 and 8 when evaluating the expected value expression.)

Statistical Modelling Chapter XI


Proof of theorem XI.13 (continued)

  • So we find that the estimator of the expected value for the observation in the ith block with the jth treatment is the sum of:

    • the ith element of and

    • the jth element of ,

    • unless the treatment is t (j = t), when it is the ith element of

  • Estimator of the expected value for an observation

    • with treatment t in the ith block:

    • with the jth treatment (j ≠ t) in the ith block:

  • That is, for all i and j, and clearly

Statistical Modelling Chapter XI


Notes on estimator for RCBD

  • That is the estimator of the expected values can be written as the sum of the grand mean plus main effects.

  • Note that the estimator is a function of means with

  • where Xi=XB, XT or XG.

  • That is, MB, MT and MG are the block, treatment and grand mean operators, respectively.

  • Further, if the data in the vector Y has been arranged in prerandomized order with all the observations for a block placed together, the operators are:
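The displays for MB, MT and MG are not reproduced in this transcript; assuming the data are in prerandomized order with all observations for a block together, the usual forms are MB = Ib ⊗ (1/t)Jt, MT = (1/b)Jb ⊗ It and MG = (1/n)Jn, as in this sketch.

```python
import numpy as np

b, t = 3, 4
n = b * t
I, J = np.eye, lambda k: np.ones((k, k))

M_B = np.kron(I(b), J(t) / t)        # replaces each value by its block mean
M_T = np.kron(J(b) / b, I(t))        # replaces each value by its treatment mean
M_G = J(n) / n                       # replaces each value by the grand mean

y = np.arange(n, dtype=float)
print(M_B @ y)                                        # block means, each repeated t times
print(np.allclose(M_B @ M_B, M_B),                    # mean operators are idempotent
      np.allclose(M_T @ M_G, M_G))                    # averaging a constant vector leaves it unchanged
```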

Statistical Modelling Chapter XI


Example XI.2 The RCBD (continued)

  • For the estimable function ψ23 = β2 + τ3,

  • In this case, the mean operators are as follows:

Statistical Modelling Chapter XI


Example XI.2 The RCBD (continued)

Statistical Modelling Chapter XI


Example XI.2 The RCBD (continued)

Statistical Modelling Chapter XI


XI.C Generalized least squares (GLS) estimation of the expectation parameters in general linear models

  • Define a GLS estimator.

  • Definition XI.9: Let Y be a random vector with ψ = E[Y] = Xθ and var[Y] = V where X is an n × q matrix of rank m ≤ q, θ is a q × 1 vector of unknown parameters, V is an n × n positive-definite matrix and n ≥ q.

  • Then the generalized least squares estimator of θ is the estimator that minimizes the "sum of squares" (Y − Xθ)'V⁻¹(Y − Xθ).

Statistical Modelling Chapter XI


GLS estimator

  • Theorem XI.14: Let Y be a random vector with ψ = E[Y] = Xθ and var[Y] = V where X is an n × q matrix of rank m ≤ q, θ is a q × 1 vector of unknown parameters, V is an n × n positive-definite matrix and n ≥ q.

  • Then the generalized least squares estimator of θ is denoted by θ̂ and is given by θ̂ = (X'V⁻¹X)⁻X'V⁻¹Y.

  • Proof: similar to proof for theorem XI.1.

  • Unsurprisingly, properties of GLS estimators parallel those for OLS estimators.

  • In particular, they are BLUE as the next theorem states.
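Before the next theorem, a minimal numpy sketch of the GLS estimator of theorem XI.14 for a full-rank X (the design, data and V here are simulated placeholders):

```python
import numpy as np

rng = np.random.default_rng(4)
n, q = 10, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
theta_true = np.array([1.0, 2.0])

A = rng.normal(size=(n, n))
V = A @ A.T + n * np.eye(n)                      # arbitrary positive-definite variance matrix
Y = X @ theta_true + np.linalg.cholesky(V) @ rng.normal(size=n)

V_inv = np.linalg.inv(V)
theta_gls = np.linalg.solve(X.T @ V_inv @ X, X.T @ V_inv @ Y)
print(theta_gls)                                 # GLS estimate of theta
```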

Statistical Modelling Chapter XI


Gauss-Markoff for GLS

  • Theorem XI.15 [Gauss-Markoff]: Let Y be a random vector with

    ψ = E[Y] = Xθ and var[Y] = V

    where X is an n × q matrix of rank m ≤ q, θ is a q × 1 vector of unknown parameters, V is an n × n positive-definite matrix and n ≥ q.

    Let ℓ'θ be an estimable function.

    Then the generalized least squares estimator ℓ'θ̂

  • is the best linear unbiased estimator of ℓ'θ, with variance ℓ'(X'V⁻¹X)⁻ℓ.

  • Proof: not given

Statistical Modelling Chapter XI


Estimation of fixed effects for the RCBD when Blocks are random

  • We now derive the estimator of τ in the model for the RCBD where Blocks are random.

  • Symbolically, the model is

    E[Y] = Treats and var[Y] = Blocks + Blocks∧Units.

  • One effect of making Blocks random is that the maximal, marginality-compliant model for the experiment is of full rank and so the estimator of τ will be BLUE.

  • Before deriving estimators under this model, we establish the properties of a complete set of mutually-orthogonal idempotents as they will be useful in proving the results.

Statistical Modelling Chapter XI


Complete set of mutually-orthogonal idempotents

  • Definition XI.10: A set of idempotents {QF} forms a complete set of mutually-orthogonal idempotents (CSMOI) if QFQF* = QF when F = F*, QFQF* = 0 otherwise, and the QF sum to In.

  • Of particular interest is that the set of Q matrices from a structure formula based on crossing and nesting relationships forms a CSMOI.

  • e.g. RCBD

Statistical Modelling Chapter XI


Inverse of CSMOI

  • Lemma XI.7: For a matrix that is a linear combination of a complete set of mutually-orthogonal idempotents, say A = ΣF aF QF with every aF ≠ 0,

    its inverse is A⁻¹ = ΣF (1/aF) QF.

  • Proof:

Statistical Modelling Chapter XI


Estimators for only Treatments fixed

  • In the following theorem we show that the GLS estimator for the expectation model when only Treatments is fixed is the same as the OLS estimator for a model involving only fixed Treatments, such as would occur with a CRD.

  • However, first a useful lemma.

  • Lemma XI.8: Let A, B, C and D be square matrices and a, b and c be scalar constants. Then,

provided A and C, as well as B and D, are conformable.

Statistical Modelling Chapter XI


Estimators for only Treatments fixed (cont'd)

  • Theorem XI.16: Let Y be a random vector for the observations from an RCBD with the Yijk arranged such that those from the same block occur consecutively, in treatment order, as in a prerandomized layout.

  • Also, let

  • and the GLS estimator of the expected values is

  • where

Statistical Modelling Chapter XI


Proof of theorem XI.16

  • From theorem XI.14 (GLS estimator), the GLS estimator of τ is given by

the generalized inverse being replaced by an inverse here because XT is of full rank.

Statistical Modelling Chapter XI


Proof of theorem XI.16 (continued)

  • We start by obtaining an expression for V in terms of QG, QB and QBU as these form a complete set of mutually orthogonal idempotents.

  • Now,

  • But, from section IV.C, Hypothesis testing using the ANOVA method for an RCBD, we know that

  • so that

Statistical Modelling Chapter XI


Proof of theorem XI.16 (continued)

  • Consequently

  • Since QG, QB and QBU form a complete set of mutually orthogonal idempotents, we use lemma XI.7 (CSMOI inverse) to obtain

  • where

  • and we have our expression for V⁻¹.

Statistical Modelling Chapter XI


Proof of theorem XI.16 (continued)

  • We then see that

  • so that

Statistical Modelling Chapter XI


Proof of theorem XI.16 (continued)

  • Now

  • Thirdly, to find the inverse of this:

    • note that the two matrices in it are symmetric idempotent matrices whose sum is I and whose product is zero

    • so that they form a complete set of mutually orthogonal idempotents — see also lemma XI.4 (special inverse).

Statistical Modelling Chapter XI


Proof of theorem XI.16 (continued)

  • Finally,

Statistical Modelling Chapter XI


Proof of theorem XI.16 (continued)

  • The GLS estimator of the expected values is then given by

  • and, from corollary XI.1 (expected values for a single-term model),

  • So the estimators of the expected values under this model are just the treatment means, the same as for the simple linear model.

  • This result is generally true for orthogonal experiments, as stated in the next theorem, which we give without proof.

Statistical Modelling Chapter XI


Orthogonal experiments

  • Definition XI.11: An orthogonal experiment is one for which QFQF* equals either QF, QF* or 0 for all F, F* in the experiment.

  • All the experiments in these notes are orthogonal.

  • Theorem XI.17: For orthogonal experiments with a variance matrix that can be expressed as a linear combination of a complete set of mutually-orthogonal idempotents, the GLS estimator is the same as the OLS estimator for the same expectation model.

  • Proof: not given

  • This theorem applies to all orthogonal experiments for which all the variance terms correspond to unrandomized generalized factors
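A numerical illustration of theorem XI.17 for the RCBD with Blocks random (b = 5, t = 4, arbitrary variance-component values): with V a linear combination of the CSMOI, the GLS estimate of τ equals the ordinary treatment means (the OLS estimate).

```python
import numpy as np

b, t = 5, 4
n = b * t
X_T = np.kron(np.ones((b, 1)), np.eye(t))                 # treatments fixed
V = 2.0 * np.kron(np.eye(b), np.ones((t, t))) + 1.0 * np.eye(n)  # sigma_B^2 and sigma^2 placeholders

rng = np.random.default_rng(5)
Y = X_T @ np.array([10., 12., 9., 11.]) + np.linalg.cholesky(V) @ rng.normal(size=n)

V_inv = np.linalg.inv(V)
tau_gls = np.linalg.solve(X_T.T @ V_inv @ X_T, X_T.T @ V_inv @ Y)
tau_ols = Y.reshape(b, t).mean(axis=0)                     # treatment means
print(np.allclose(tau_gls, tau_ols))                       # True: GLS = OLS here
```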

Statistical Modelling Chapter XI


XI.D Maximum likelihood estimation of the expectation parameters

  • Maximum likelihood estimation: based on obtaining the values of the parameters that make the data most 'likely'.

  • Least squares estimation minimizes the sum of squares of the differences between the data and the expected values.

  • Need likelihood function to maximize.

Statistical Modelling Chapter XI


The likelihood function

  • Definition XI.12: The likelihood function is the joint distribution function, f(y; φ), of n random variables Y1, Y2, …, Yn evaluated at y = (y1, y2, …, yn).

  • For fixed y the likelihood function is a function of the k parameters, φ = (φ1, φ2, …, φk), and is denoted by L(φ; y).

  • If Y1, Y2, …, Yn represents a random sample from f(y; φ) then L(φ; y) = f(y; φ).

  • The notation f(y; φ) indicates that

    • y is the vector of variables for the distribution function

    • the values of φ are fixed for the population under consideration.

  • This is read 'the function f of y given φ'.

  • The likelihood function reverses the roles:

    • the variables are φ and y is considered fixed

    • so we have the likelihood of φ given y.

Statistical Modelling Chapter XI


Maximum likelihood estimates

  • Definition XI.13: Let L(φ; y) = f(y; φ) be the likelihood function for Y1, Y2, …, Yn. The maximum likelihood estimate of φ is the value of φ that maximizes L(φ; y).

  • An expression for this estimate as a function of y can be derived.

  • The maximum likelihood estimators are defined to be the same function as the estimate, with Y substituted for y.

Statistical Modelling Chapter XI


Procedure for obtaining the maximum likelihood estimators

  • Write the likelihood function L(φ; y) as the joint distribution function for the n observations.

  • Find ℓ = ln[L(φ; y)].

  • Maximize ℓ with respect to φ to derive the maximum likelihood estimates.

  • Obtain the maximum likelihood estimators.

  • The quantity ℓ is referred to as the log likelihood.

  • Maximizing ℓ is equivalent to maximizing L(φ; y) since ln is a monotonic function.

  • It will turn out that the expression for ℓ is often simpler than that for L(φ; y).

Statistical Modelling Chapter XI


Probability distribution function

  • We have not previously specified a probability distribution function to be used as part of the model.

  • However, we need one for maximum likelihood estimation (least squares estimation does not).

  • A distribution function commonly used for continuous variables is the multivariate normal distribution.

  • Definition XI.14: The multivariate normal distribution function for Y is

    f(y) = (2π)^(−n/2) |V|^(−1/2) exp{−½ (y − ψ)'V⁻¹(y − ψ)}.

  • Now the general linear model for a response variable is

  • where

    • Y is a random vector, ψ = E[Y] = Xθ,

    • X is an n × q matrix of rank m ≤ q with n ≥ q,

    • θ is a q × 1 vector of unknown parameters, and var[Y] = V.

Statistical Modelling Chapter XI


Deriving the maximum likelihood estimators of θ

  • Theorem XI.18: Let Y be a random vector whose distribution is multivariate normal.

  • Then the maximum likelihood estimator of θ is denoted by θ̂ and is given by the GLS expression θ̂ = (X'V⁻¹X)⁻X'V⁻¹Y.

Statistical Modelling Chapter XI


Proof of theorem XI.18

The likelihood function L(φ; y)

  • Given the distribution function for Y, the likelihood function L(φ; y) is

  • Find ℓ = ln[L(φ; y)]

Statistical Modelling Chapter XI


Proof of theorem XI.18 (continued)

Maximize ℓ with respect to θ

  • The maximum likelihood estimates of θ are then obtained by maximizing ℓ with respect to θ, that is, by differentiating ℓ with respect to θ and setting the result equal to zero.

  • Clearly, the maximum likelihood estimates will be the solution of

  • Now,

Statistical Modelling Chapter XI


Proof of theorem XI.18 (continued)

  • and applying the rules for differentiation given in the proof of theorem XI.1, we obtain

  • Setting this derivative to zero we obtain

So for general linear models the maximum likelihood and generalized least squares estimators coincide.

Statistical Modelling Chapter XI


XI.E Estimating the variance

a) Estimating the variance for the simple linear model

  • Generally, σ² is unknown and has to be estimated.

  • Now the definition of the variance is

  • var[Y] = E[(Y − E[Y])²].

  • The logical estimator is one that parallels this definition:

    • put in our estimate for E[Y] and

    • replace the first expectation operator by the mean.

Statistical Modelling Chapter XI


Fitted values and residuals

  • Definition XI.15: The fitted values are:

    • the estimated expected values for each observation,

    • obtained by substituting the x-values for each observation into the estimated equation,

    • given by ψ̂ = Xθ̂.

  • The residuals are:

    • the deviations of the observed values of the response variable from the fitted values,

    • denoted by ê,

    • the estimates of the errors.

Statistical Modelling Chapter XI


Estimator of σ²

  • and our estimate would be ê'ê/n.

  • Notice that the numerator of this expression measures the spread of observed Y-values from the fitted equation.

  • We expect this to be random variation.

  • It turns out that this is the ML estimator, as claimed in the next theorem.

  • Theorem XI.19: Let Y be a normally distributed random vector representing a random sample with E[Y] = Xθ and var[Y] = VY = σ²In, where X is an n × q matrix of rank m ≤ q with n ≥ q, and θ is a q × 1 vector of unknown parameters.

  • The logical estimator of σ² is:

Proof: left as an exercise

Statistical Modelling Chapter XI


Is the estimator unbiased?

  • Theorem XI.20: Let

  • with

    • Y the n × 1 random vector of sample random variables,

    • X an n × q matrix of rank m ≤ q with n ≥ q,

    • θ̂ a q × 1 vector of estimators of θ.

  • Then E[ê'ê] = (n − m)σ², so an unbiased estimator of σ² is ê'ê/(n − m).

  • Proof: The proof of this theorem is postponed.

  • The latter estimator indicates how we might go more generally,

    • once it is realized that this estimator is one that would be obtained using the Residual MSQ from an ANOVA based on simple linear models.
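A small numpy sketch contrasting the two estimators of σ² just discussed, ê'ê/n (the ML estimator) and ê'ê/(n − m) (unbiased, the Residual MSQ), using simulated data:

```python
import numpy as np

rng = np.random.default_rng(6)
n, q = 30, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, q - 1))])
Y = X @ np.array([1.0, 0.5, -0.5]) + rng.normal(scale=2.0, size=n)

theta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
e_hat = Y - X @ theta_hat                       # residuals
m = np.linalg.matrix_rank(X)

sigma2_ml = e_hat @ e_hat / n                   # ML estimator (biased downwards)
sigma2_unbiased = e_hat @ e_hat / (n - m)       # Residual MSQ (unbiased)
print(sigma2_ml, sigma2_unbiased)
```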

Statistical Modelling Chapter XI


b) Estimating the variance for the general linear model

  • Generally, the variance components (the σ²s) that are the parameters of the variance matrix, V, can be estimated using the ANOVA method.

  • Definition XI.16: The ANOVA method for estimating the variance components consists of equating the expected mean squares to the observed values of the mean squares and solving for the variance components.
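A minimal sketch of the ANOVA method for the RCBD with Blends random, assuming the usual expected mean squares E[MSq_Residual] = σ² and E[MSq_Blends] = σ² + tσB² (the mean-square values below are placeholders, not the values in the notes):

```python
t = 4                      # number of treatments (plots per blend)
msq_blends   = 264.0       # placeholder observed mean squares
msq_residual = 18.0

# ANOVA method: equate observed mean squares to their expectations and solve
sigma2_hat   = msq_residual                      # from E[MSq_Residual] = sigma^2
sigma2_B_hat = (msq_blends - msq_residual) / t   # from E[MSq_Blends]   = sigma^2 + t*sigma_B^2
print(sigma2_hat, sigma2_B_hat)
```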

Statistical Modelling Chapter XI


Example IV.1 Penicillin yield

  • This example involves 4 treatments applied using an RCBD employing 5 Blends as blocks.

  • Suppose it was decided that the Blends are random.

  • The ANOVA table, with E[MSq]s, for this case is

  • so that

  • That is, the magnitude of the two components of variation is similar.

Statistical Modelling Chapter XI


Example IV.1 Penicillin yield

  • so that

  • (see the variance matrix in section XI.A, Linear models for designed experiments, above),

  • Clearly, the two components make similar contributions to the variance of an observation.

Statistical Modelling Chapter XI


XI.G Exercises

  • Ex. 11-1 asks you to provide matrix expressions for models and the estimator of the expected values.

  • Ex. 11-2 involves a generalized inverse and its use.

  • Ex. 11-3 asks you to derive the variance of the estimator of the expected values.

  • Ex. 11-4 involves identifying estimable parameters and the estimator of expected values for a factorial experiment.

  • Ex. 11-5 asks for the E[Msq], matrix expressions for V and estimates of variance components.

  • Ex. 11-6 involves max. likelihood estimation (not covered).

  • Ex. 11-7 requires the proofs of some properties of Q and M matrices for the RCBD.

Statistical Modelling Chapter XI

