Linear beta pricing models: cross-sectional regression tests
FINA790C Spring 2006 HKUST
Motivation
  • The F test and ML likelihood ratio test are not without drawbacks:
  • We need T > N. To solve this we could form portfolios (but this is not without problems)
  • When the model is rejected we don’t know why (e.g. do expected returns depend on factor loadings or on characteristics?)
Cross-sectional regression
  • Can we use the information from the whole cross-section of stock returns to test linear beta pricing models?
  • Fama-MacBeth two-pass cross-sectional regression methodology
    • Estimate each asset’s beta from time-series regression
    • Cross-sectional regression of asset returns on constant, betas (and possibly other characteristics)
    • Run cross-sectional regressions each period, average coefficients over time
Linear beta pricing model
  • At time t the returns on the N securities are Rt = [R1t R2t … RNt]’ with variance matrix ΣR
  • Let ft = [f1t … fKt]’ be the vector of time-t values taken by the K factors with variance matrix Σf
  • The linear beta pricing model is E[Rit] = λ0 + λ’βi for i = 1, … , N or

E[Rt] = λ0 1 + Bλ

where B = E[(Rt – E(Rt))(ft – E(ft))’] Σf⁻¹

Return generating process
  • From the definition of B, the time series for Rt is

Rt = E[Rt] + B(ft – E(ft)) + ut

with E[ut] = 0 and E[ut ft’] = 0N×K

  • Imposing the linear beta pricing model gives

Rt = λ0 1 + B(ft – E(ft) + λ) + ut
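To make the setup concrete, here is a minimal numpy sketch that simulates returns from this return generating process. Everything in it is illustrative: the parameter values (N, K, T, λ0, λ, the betas and the covariances) are hypothetical choices, not figures from the course.

```python
import numpy as np

# Simulate R_t = lambda0*1 + B(f_t - E[f_t] + lambda) + u_t with hypothetical parameters.
rng = np.random.default_rng(0)
N, K, T = 25, 1, 600                               # assets, factors, periods

lam0 = 0.003                                       # zero-beta rate (monthly)
lam = np.array([0.006])                            # factor risk premia, length K
B = rng.uniform(0.5, 1.5, size=(N, K))             # true betas
mu_f = np.array([0.005])                           # factor means
Sigma_f = np.array([[0.045 ** 2]])                 # factor covariance, K x K
Sigma_u = np.diag(rng.uniform(0.02, 0.08, size=N) ** 2)   # residual covariance, N x N

f = rng.multivariate_normal(mu_f, Sigma_f, size=T)          # T x K factor realizations
u = rng.multivariate_normal(np.zeros(N), Sigma_u, size=T)   # T x N residuals
R = lam0 + (f - mu_f + lam) @ B.T + u                       # T x N returns
```

The later sketches reuse f and R to walk through the two-pass estimators.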

CSR method: description
  • Define γ = [λ0 λ’]’ ( a (K+1)×1 vector ) and

X = [ 1 B ] ( an N×(K+1) matrix )

  • Assume N > K and rank(X) = K+1
  • Then E[Rt] = [ 1 B ][λ0 λ’]’ = Xγ
CSR – first pass
  • In the first step we estimate Σf and B with the usual estimators

Σf* = (1/T) ∑t (ft – μf*)(ft – μf*)’

μf* = (1/T) ∑t ft

B* = [ (1/T) ∑t (Rt – μR*)(ft – μf*)’ ] Σf*⁻¹

μR* = (1/T) ∑t Rt

  • In practice we can use a rolling estimation period prior to the testing period
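Continuing the simulated example, a minimal sketch of these first-pass estimators (full-sample rather than rolling estimation, for simplicity); the variable names are mine, not the deck's notation.

```python
# First pass: usual time-series estimators of mu_f, mu_R, Sigma_f and B
# from the simulated f (T x K) and R (T x N) above.
mu_f_hat = f.mean(axis=0)                              # mu_f*
mu_R_hat = R.mean(axis=0)                              # mu_R*
f_dev = f - mu_f_hat                                   # T x K deviations
R_dev = R - mu_R_hat                                   # T x N deviations
Sigma_f_hat = f_dev.T @ f_dev / T                      # Sigma_f*, K x K
B_hat = (R_dev.T @ f_dev / T) @ np.linalg.inv(Sigma_f_hat)   # B*, N x K
```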
CSR - second pass
  • In the second step, for each t = 1, … , T we use the estimate B* of the beta matrix and run a cross-sectional regression of returns on the estimated betas

γt* = (X*’Q*X*)⁻¹ X*’Q* Rt (for feasible GLS with weighting matrix Q*)

where X* = [ 1 B* ]

  • The time-series average is

γ** = (1/T) ∑t γt* = (1/T) ∑t (X*’Q*X*)⁻¹ X*’Q* Rt

= (X*’Q*X*)⁻¹ X*’Q* μR*
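A sketch of the second pass on the simulated data. Taking Q* to be the inverse of the sample covariance matrix of returns is one common feasible-GLS choice, assumed here purely for illustration (Q* = I gives the OLS case on the next slide).

```python
# Second pass: cross-sectional GLS of R_t on X* = [1  B*] for each t, then average.
X_hat = np.column_stack([np.ones(N), B_hat])           # X*, N x (K+1)
Q_hat = np.linalg.inv(np.cov(R, rowvar=False))         # Q*, N x N weighting matrix
A = np.linalg.inv(X_hat.T @ Q_hat @ X_hat) @ X_hat.T @ Q_hat   # (X*'Q*X*)^-1 X*'Q*
gamma_t = R @ A.T                                      # T x (K+1): gamma_t* each period
gamma_bar = gamma_t.mean(axis=0)                       # gamma**, equals A @ mu_R_hat
```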

Fama-MacBeth OLS
  • Fama-MacBeth set Q = IN, so

γOLS,t* = (X*’X*)⁻¹ X*’ Rt

  • The time-series average is

γOLS** = (X*’X*)⁻¹ X*’ μR*

  • And the variance of γOLS* is estimated by

(1/T) ∑t (γOLS,t* – γOLS**)(γOLS,t* – γOLS**)’
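The same steps with Q = IN, plus the Fama-MacBeth variance estimator from this slide, continuing the simulated example.

```python
# Fama-MacBeth OLS: cross-sectional estimator (X*'X*)^-1 X*' each period.
A_ols = np.linalg.inv(X_hat.T @ X_hat) @ X_hat.T       # (K+1) x N
gamma_ols_t = R @ A_ols.T                               # per-period gamma_OLS,t*
gamma_ols_bar = gamma_ols_t.mean(axis=0)                # gamma_OLS**, equals A_ols @ mu_R_hat

# Fama-MacBeth variance estimator (1/T) sum_t (gamma_t* - gamma**)(gamma_t* - gamma**)';
# dividing by T gives the usual standard errors for the time-series average.
dev = gamma_ols_t - gamma_ols_bar
V_fm = dev.T @ dev / T
se_fm = np.sqrt(np.diag(V_fm) / T)
```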

Issues in CSR methodology
  • We don’t observe the true beta matrix B, only B* measured with error: what is the effect on the sampling distribution of the estimates?
  • How is CSR methodology related to maximum likelihood methodology?
Sampling distribution of γ**
  • Let D = (X’QX)⁻¹ X’Q, X = [ 1 B ]
  • Basic Result: if (Rt’, ft’)’ is stationary and serially independent then, under standard assumptions, as T→∞, √T(γ** – γ) converges in distribution to a multivariate normal with mean zero and covariance

V = D ΣR D’ + D Π D’ – D(Γ + Γ’)D’

Where does V come from?
  • Write μR* = X*γ + (μR* - E(Rt)) – (B*-B)λ
  • So √T( γ** -γ) =

(X*’Q*X*)-1X*’Q* √T(μR* - E(Rt))

- (X*’Q*X*)-1X*’Q* √T(B* - B) λ

  • Error in estimating γ comes from:
    • Using average rather than expected returns
    • Using estimated rather than true betas
Comparing V to Fama-MacBeth variance estimator
  • From the definition of γOLS* its asymptotic variance is

(X’X)⁻¹ X’ ΣR X (X’X)⁻¹ = D ΣR D’

  • So in general the Fama-MacBeth standard errors are incorrect because of the errors-in-variables problem
Special case: conditional homoscedasticity of residuals given factors
  • Suppose we also assume that, conditional on the values of the factors ft, the time-series regression residuals ut have zero expectation, constant covariance matrix ΣU, and are serially uncorrelated
  • This will hold if (Rt’, ft’)’ is iid and jointly multivariate normal
Asymptotic variance for special case
  • Recall γ = [λ0 λ’]’ ( a (K+1)×1 vector ) and define the (K+1)×(K+1) bordered matrix

Σf† = [ 0    0K’ ]
      [ 0K   Σf  ]

  • Then the Basic Result holds with

V = Σf† + (1 + λ’Σf⁻¹λ) D ΣU D’

  • Asymptotically valid standard errors are obtained by substituting consistent estimates for the various parameters
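A sketch of those standard errors on the simulated data, plugging the sample quantities from the earlier snippets into the special-case formula; using the full-sample residual covariance and the GLS D matrix as the consistent estimates is an assumption made for illustration.

```python
# Estimate V = Sigma_f_dagger + (1 + lam' Sigma_f^-1 lam) * D Sigma_U D'.
lam_hat = gamma_bar[1:]                                 # estimated risk premia
c_hat = 1.0 + lam_hat @ np.linalg.inv(Sigma_f_hat) @ lam_hat

U_hat = R_dev - f_dev @ B_hat.T                         # time-series residuals, T x N
Sigma_U_hat = U_hat.T @ U_hat / T                       # Sigma_U estimate

Sigma_f_dagger = np.zeros((K + 1, K + 1))               # bordered factor covariance
Sigma_f_dagger[1:, 1:] = Sigma_f_hat

V_hat = Sigma_f_dagger + c_hat * (A @ Sigma_U_hat @ A.T)    # A is D* from the GLS pass
se_gamma_bar = np.sqrt(np.diag(V_hat) / T)              # corrected s.e. of gamma**
# The plain Fama-MacBeth estimator misses the lam' Sigma_f^-1 lam part of this
# correction, which is the understatement discussed on the comparison slide.
```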

Example: Sharpe-Lintner-Black CAPM
  • For K = 1 (a single market factor), λ’Σf⁻¹λ = λ1²/σM², so V simplifies to Σf† + (1 + λ1²/σM²) D ΣU D’
  • The usual Fama-MacBeth variance estimator (which ignores estimation error in the betas) understates the correct variance, except under the null hypothesis that λ1 (the market risk premium) = 0
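As a back-of-the-envelope illustration with purely hypothetical monthly numbers:

```python
# If lam1 = 0.6% (market premium) and sigma_M = 4.5% (factor s.d.), the correction
# factor 1 + lam' Sigma_f^-1 lam for K = 1 is
correction = 1 + (0.006 / 0.045) ** 2
print(correction)    # about 1.018, so the understatement is modest in this example
```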
Maximum likelihood and two-pass CSR
  • MLE estimates B and γ simultaneously and thereby solves the errors-in-variables problem.
  • The asymptotic covariance matrix of the two-pass cross-sectional regression GLS estimator γ** is the same as that of the MLE
  • That is, two-pass GLS is consistent and asymptotically efficient as T→∞
Two-pass GLS
  • For given T, however, as N→∞ the two-pass GLS estimator still suffers from an errors-in-variables problem from using B* (i.e. two-pass GLS is not N-consistent)
  • We can make the two-pass GLS estimator N-consistent as well through a simple modification (see Litzenberger and Ramaswamy (1979), Shanken (1992))
Modified two-pass CSR
  • For example: Sharpe-Lintner-Black CAPM estimated with two-pass OLS
  • The errors-in-variables problem applies to the betas, i.e. the lower right-hand block of X*’X*. Note that

E(β*’β*) = β’β + tr(ΣU)/(T σM²*)

  • So deduct the last term from the lower right-hand block of X*’X*; this adjustment corrects for the EIV problem as N→∞ (a numerical sketch follows)
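A sketch of the adjustment for this K = 1, two-pass OLS case, reusing the estimates from the earlier snippets; the indexing assumes the single beta column is the second column of X*.

```python
# Subtract the EIV bias term tr(Sigma_U)/(T * sigma_M^2) from the beta*'beta*
# (lower right-hand) entry of X*'X* before inverting.
XtX = X_hat.T @ X_hat                                   # (K+1) x (K+1)
sigma_M2_hat = Sigma_f_hat[0, 0]                        # sample variance of the market factor
bias = np.trace(Sigma_U_hat) / (T * sigma_M2_hat)       # E[b*'b*] - b'b

XtX_adj = XtX.copy()
XtX_adj[1, 1] -= bias                                   # adjust the lower right-hand block
gamma_adj = np.linalg.inv(XtX_adj) @ X_hat.T @ mu_R_hat # N-consistent two-pass estimates
```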