
STATISTICS FOR LAWYERS




“Nonparametric” Statistics

“Distribution-Free” Statistics

- Treat data as nominal
  - Chi-square tests
  - Binomial
  - Sign test (for matched samples)

- Use specialized tests that make no assumptions about the underlying distribution of the data
- If the sample size is sufficient, replace the original values with ranks and run traditional tests
  - The variance is known, because ranks follow a uniform distribution

- Large samples
  - The statistics themselves will conform to the central limit theorem and have a known standard deviation
  - Normal theory can be used to test hypotheses
- Small samples
  - Exact tables are available in specialized textbooks
    - Siegel, Nonparametric Statistics
  - Some tables may be available online


- Dichotomize the data
  - Treat each observation as a coin flip; under the null hypothesis p = ½ (for a median split)
- A split at the median is a common approach
- Example
  - Students who took the stats workshop
  - Is the number in the top half (top quarter?) of their class greater than chance would predict?
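The dichotomize-and-count approach reduces to a binomial (sign) test. A minimal sketch with hypothetical numbers — 14 of 20 workshop students landing in the top half of their class:

```python
from math import comb

# Hypothetical data: 14 of 20 workshop students finished in the top
# half of their class.  Under the null, each lands there with p = 1/2.
n, k, p = 20, 14, 0.5

# One-sided p-value: probability of k or more successes under the null
p_value = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
print(round(p_value, 4))
```

For a top-quarter split the same computation applies with p = ¼ under the null.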

Runs test example: 16 males and 12 females.

Small sample: observed R = 17; the critical value from the table is 9 (or less).

Large sample: E{R} = 15 and σ{R} = sqrt(27)/2 ≈ 2.6, so an observed R of 12 gives Z = (12 − 15)/2.6 = −1.15.
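The large-sample computation can be sketched directly. Note the slide uses a rounded shortcut for σ{R} (sqrt(27)/2 ≈ 2.6); the exact runs-test formula below gives ≈ 2.54, so the resulting Z differs slightly from the −1.15 above:

```python
from math import sqrt

# Runs test for randomness, large-sample form.  Two groups of sizes
# n1 = 16 and n2 = 12 (the males and females from the example).
n1, n2 = 16, 12
N = n1 + n2

# Expected number of runs and its standard deviation under randomness
e_r = 2 * n1 * n2 / N + 1
sd_r = sqrt(2 * n1 * n2 * (2 * n1 * n2 - N) / (N**2 * (N - 1)))

# Observed R = 12 runs, as in the large-sample example
z = (12 - e_r) / sd_r
print(round(e_r, 1), round(sd_r, 2), round(z, 2))
```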

Uniform Distribution

Ranks follow a (discrete) uniform distribution, so the central limit theorem applies and the variance is known.
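For ranks 1..N the variance has the closed form (N² − 1)/12 — a quick check (N = 28 here is just an illustrative size):

```python
# Ranks 1..N form a discrete uniform distribution, so their variance is
# known in closed form: (N**2 - 1) / 12.  A quick check for N = 28:
N = 28
ranks = range(1, N + 1)
mean = sum(ranks) / N
var = sum((r - mean) ** 2 for r in ranks) / N   # population variance
print(var, (N**2 - 1) / 12)
```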

[Histograms: Number of Hours Devoted to Civil Cases by Hourly Fee Lawyers — all cases; with cases over 500 hours dropped; with cases over 100 hours dropped; state vs. federal]

- Assign ranks
- Run t-test using ranks as the variable

Compare Lawyer Effort for Federal and State Cases


Alternative to the two-sample t-test (the Wilcoxon rank-sum test):

- Identify which group is smaller, and rank from low to high or high to low so that that group has the smaller ranks
- Compute W by summing the ranks of the smaller group
- If n is large enough (i.e., both samples are 10 or more), W will have an approximately normal distribution
- If n is small, exact tables are available.
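The steps above can be sketched with hypothetical hours data; both groups are of size 10, so the normal approximation applies:

```python
from math import sqrt

# Hypothetical hours for two small groups (no ties, for simplicity)
state   = [12, 18, 25, 30, 41, 55, 60, 72, 80, 95]
federal = [20, 35, 50, 65, 90, 110, 130, 150, 160, 200]

# Rank all observations together, low to high
pooled = sorted(state + federal)
rank = {v: i + 1 for i, v in enumerate(pooled)}

# W is the sum of the ranks of one group (both are size 10 here)
W = sum(rank[v] for v in state)

# Large-sample normal approximation (both n >= 10)
n1, n2 = len(state), len(federal)
mu = n1 * (n1 + n2 + 1) / 2
sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z = (W - mu) / sigma
print(W, round(z, 2))
```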


Taking the logarithm will sometimes cure nonnormality issues.

If there are values of 0, add 1 before taking the log.
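A one-line illustration with made-up hours data containing a zero:

```python
import math

# Hours data containing a zero: log(0) is undefined, so add 1 first
hours = [0, 3, 10, 120, 500]
logged = [math.log(h + 1) for h in hours]   # equivalently math.log1p(h)
print([round(v, 2) for v in logged])
```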


t-test on original data produced a t of -5.590

Rank tests would not change under the log transform, because the transform is monotonic: the log-transformed data have exactly the same ranks as the original data.


- Tests that rely on means can be substantially influenced by a small number of extreme values
- The log transform is one way to reduce the influence of outliers
- A second approach is to use the ranks rather than the original values


Alternative to the matched-pair t-test (the Wilcoxon signed-rank test)

before  after  differ  rank
  76     65     -11    (-)6
  32     24      -8    (-)5
  65     70      +5    (+)4
  87     85      -2    (-)1
  22     25      +3    (+)2.5
   9     12      +3    (+)2.5
  37     25     -12    (-)7

T = 2.5 + 2.5 + 4 = 9

critical value at the .05 level is 2 (reject if T ≤ 2), so the null hypothesis is not rejected
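The signed-rank computation from the table above can be reproduced directly:

```python
# Reproduce the signed-rank computation from the table above
before = [76, 32, 65, 87, 22, 9, 37]
after  = [65, 24, 70, 85, 25, 12, 25]
diffs  = [a - b for b, a in zip(before, after)]   # [-11, -8, +5, -2, +3, +3, -12]

# Rank by absolute difference, averaging ranks for ties
by_abs = sorted(diffs, key=abs)
ranks = {}
i = 0
while i < len(by_abs):
    j = i
    while j < len(by_abs) and abs(by_abs[j]) == abs(by_abs[i]):
        j += 1
    avg = (i + 1 + j) / 2          # average of ranks i+1 .. j
    for k in range(i, j):
        ranks.setdefault(abs(by_abs[k]), avg)
    i = j

# T is the smaller of the two signed-rank sums; here the positive one
T = sum(ranks[abs(d)] for d in diffs if d > 0)
print(T)
```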

Alternative to one-way analysis of variance (the Kruskal-Wallis test):

H = [12 / (N(N + 1))] Σ Ri² / Ni − 3(N + 1), divided by 1 − Σ Tj / (N³ − N) to correct for ties, where:

- N = total number of observations
- Ni = number of observations in ith group
- Ri = sum of the ranks in the ith group
- M = number of sets of ties
- Tj = tj³ − tj
- tj = number of observations tied for the jth set of ties
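A sketch of the standard Kruskal-Wallis H statistic with the tie correction built from the Tj = tj³ − tj terms; the three groups of case hours are hypothetical:

```python
from collections import Counter

# Kruskal-Wallis H with the tie correction, applied to three
# hypothetical groups of case hours.
groups = [[12, 30, 41, 30], [20, 35, 50, 90], [55, 90, 130, 150]]

# Pool and assign mid-ranks (tied values share the average rank)
pooled = sorted(v for g in groups for v in g)
N = len(pooled)
rank_of = {}
i = 0
while i < N:
    j = i
    while j < N and pooled[j] == pooled[i]:
        j += 1
    rank_of[pooled[i]] = (i + 1 + j) / 2   # average of ranks i+1 .. j
    i = j

# H = 12/(N(N+1)) * sum(R_i^2 / N_i) - 3(N+1)
H = 12 / (N * (N + 1)) * sum(
    sum(rank_of[v] for v in g) ** 2 / len(g) for g in groups
) - 3 * (N + 1)

# Tie correction: divide by 1 - sum(T_j)/(N^3 - N), T_j = t_j^3 - t_j
T = sum(t**3 - t for t in Counter(pooled).values() if t > 1)
H_corrected = H / (1 - T / (N**3 - N))
print(round(H_corrected, 3))
```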

Spearman’s rank correlation: replace the original values of each variable with ranks, and then compute Pearson’s product-moment correlation using the ranks as the data.
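A minimal sketch of that recipe, using hypothetical tie-free data:

```python
from math import sqrt

# Spearman's rho: replace each variable with ranks, then compute the
# ordinary Pearson correlation on the ranks (hypothetical data, no ties)
x = [10, 20, 30, 40, 50]
y = [12, 25, 22, 48, 60]

def ranks(v):
    order = sorted(v)
    return [order.index(val) + 1 for val in v]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    return cov / sqrt(sum((ai - ma) ** 2 for ai in a) *
                      sum((bi - mb) ** 2 for bi in b))

rho = pearson(ranks(x), ranks(y))
print(rho)
```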

- Rank observations separately on each variable
- Look at each pair of observations
  - call one observation a and the other b
  - if observation a has a lower rank than observation b on both variables, the pair is concordant
  - if observation a has a higher rank than observation b on both variables, the pair is also concordant
  - otherwise the pair is discordant

C is the number of concordant pairs; D is the number of discordant pairs; tau = (C − D) / [n(n − 1)/2]
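The pair-counting definition can be sketched directly (hypothetical tie-free data):

```python
from itertools import combinations

# Kendall's tau by direct pair counting (hypothetical data, no ties)
x = [10, 20, 30, 40, 50]
y = [12, 25, 22, 48, 60]

C = D = 0
for (xa, ya), (xb, yb) in combinations(zip(x, y), 2):
    if (xa - xb) * (ya - yb) > 0:    # same ordering on both: concordant
        C += 1
    else:                            # opposite ordering: discordant
        D += 1

# tau = (C - D) / (number of pairs)
n = len(x)
tau = (C - D) / (n * (n - 1) / 2)
print(C, D, tau)
```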

Specialized versions exist for a contingency table formed from two ordinal variables:

- gamma = (C − D) / (C + D)
- Somers’ asymmetric d

- “Grouped ordinal” vs. ranks
  - True ranks can be used as if they were interval, subject to the distributional assumptions of maximum likelihood
  - Grouped ordinal variables (i.e., with a small number of values, e.g., 1, 2, 3, 4, 5) can be dealt with differently

- Ordinal predictors
- Ordinal dependent variables

- Test for “linearity”
  - fit as an interval variable (e.g., values 1 to 5)
  - fit as a set of dummy variables
    - actually, add k − 2 (not k − 1) dummies to the model along with the interval version
  - test whether the dummies significantly improve the fit (i.e., does the set of dummies fit significantly better?)
- Choose the form based on the test of linearity
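The linearity test can be sketched with ordinary least squares. The data, random seed, and level coding below are all hypothetical; the point is the F-test comparing the interval fit with and without the k − 2 dummies:

```python
import numpy as np

# Sketch of the linearity test: compare an interval coding of a 5-point
# ordinal predictor against the same model plus k - 2 = 3 dummies.
rng = np.random.default_rng(0)
x = np.repeat([1, 2, 3, 4, 5], 20)            # ordinal predictor
y = 2.0 * x + rng.normal(0, 1, size=x.size)   # truly linear here

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

n = x.size
X_lin = np.column_stack([np.ones(n), x])

# Add dummies for levels 3, 4, 5 (k - 2 of them: the intercept and the
# interval term already absorb two degrees of freedom)
dummies = np.column_stack([(x == lvl).astype(float) for lvl in (3, 4, 5)])
X_full = np.column_stack([X_lin, dummies])

# F-test: do the dummies significantly reduce the residual sum of squares?
q = dummies.shape[1]
F = ((rss(X_lin, y) - rss(X_full, y)) / q) / (rss(X_full, y) / (n - X_full.shape[1]))
print(round(F, 2))   # small F -> the interval coding is adequate
```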

- Assume that there is an interval scale Y* underlying an observed grouped ordinal variable Y
- e.g., “complexity” measured on a 5 point scale

- Estimate regression model on the underlying scale along with the cut points (τ’s) that define the grouping

[Diagram: the Y* axis runs from −inf to +inf and is divided at the cut points τ1 < τ2 < τ3 < τ4 into five regions, Y=1 through Y=5]

- Assume that Y* has a standard normal distribution
  - β’s can then be interpreted as changes in Y* measured in standard deviations
- Estimation is done by finding the β’s and τ’s that maximize the probability of observing the sample
- Constraint: the cut points must be ordered, τ1 < τ2 < τ3 < τ4

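Under these assumptions, the probability of each observed category is the standard-normal mass between adjacent cut points: P(Y = k) = Φ(τk − xβ) − Φ(τk−1 − xβ), with τ0 = −inf and τ5 = +inf. A sketch with hypothetical cut points and a hypothetical linear predictor:

```python
from math import erf, sqrt

# Ordered-probit cell probabilities.  The cut points and the linear
# predictor xb below are hypothetical, purely for illustration.
def Phi(z):
    return 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF

taus = [-1.5, -0.5, 0.5, 1.5]             # tau_1 < tau_2 < tau_3 < tau_4
xb = 0.3                                  # linear predictor for one case

cuts = [float("-inf")] + taus + [float("inf")]
probs = [Phi(hi - xb) - Phi(lo - xb) for lo, hi in zip(cuts, cuts[1:])]
print([round(p, 3) for p in probs])
```

The five probabilities sum to 1 by construction, since the intervals partition the Y* axis.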