
# Experimental Design, Statistical Analysis






### Experimental Design, Statistical Analysis

CSCI 4800/6800

University of Georgia

March 7, 2002

Eileen Kraemer

### Research Design

• Elements:

• Observations/Measures

• Treatments/Programs

• Groups

• Assignment to Group

• Time

### Observations/Measures

• Notation: ‘O’

• Examples:

• Body weight

• Time to complete

• Number of correct responses

• Multiple measures: O1, O2, …

### Treatments/Programs

• Notation: ‘X’

• Use of medication

• Use of visualization

• Use of audio feedback

• Etc.

• Sometimes see X+, X-

### Groups

• Each group is assigned a line in the design notation

### Assignment to Group

• R = random

• N = non-equivalent groups

• C = assignment by cutoffs

### Time

• Moves from left to right in the diagram

• True experiment – random assignment to groups

• Quasi-experiment – no random assignment, but has a control group or multiple measures

• Non-experiment – no random assignment, no control, no multiple measures

• Pretest-posttest treatment/comparison-group randomized experiment

• Pretest-posttest non-equivalent-groups quasi-experiment

• Posttest-only non-experiment

• Goal: to be able to show causality

• First step: internal validity:

• If X, then Y

AND

• If not X, then not Y

• Two-group, posttest only, randomized experiment

Compare by testing for differences between the group means, using a t-test or one-way Analysis of Variance (ANOVA)

Note: 2 groups, post-only measure, two distributions each with mean and variance, statistical (non-chance) difference between groups
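As a sketch, the two-group posttest-only comparison above might look like this in Python. The group sizes, means, and spreads below are assumptions chosen for illustration, not values from the slides:

```python
# Sketch: analysis of a two-group, posttest-only, randomized experiment
# using an independent-samples t-test. Data are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=50.0, scale=10.0, size=30)  # no-treatment group
treated = rng.normal(loc=70.0, scale=10.0, size=30)  # treatment (X) group

t_stat, p_value = stats.ttest_ind(control, treated)  # independent t-test
print(f"t = {t_stat:.3f}, p = {p_value:.2g}")
```

A small p-value indicates a statistical (non-chance) difference between the two group means.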

• What do we mean by a difference?

• Independent t-test

• One-way Analysis of Variance (ANOVA)

• Regression Analysis (most general)

• These analyses are equivalent for this design

Solve an overdetermined system of equations for β0 and β1, minimizing the sum of the squared e-terms
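A minimal least-squares sketch of this, with hypothetical x and y values:

```python
# Sketch: solving the overdetermined system y = b0 + b1*x + e for the
# intercept b0 and slope b1 by least squares (minimizing the sum of the
# squared e-terms). The data points are hypothetical.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.1, 4.9, 7.2, 8.8])

A = np.column_stack([np.ones_like(x), x])        # design matrix [1, x]
(b0, b1), *_ = np.linalg.lstsq(A, y, rcond=None)
e = y - (b0 + b1 * x)                            # minimized error terms
```

With an intercept in the model, the residuals e sum to (numerically) zero.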

• Compares variation within groups to variation between groups

• For 2 populations, 1 treatment, same as t-test

• Statistic used is F value, same as square of t-value from t-test
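This equivalence can be checked directly with simulated data (the group parameters below are arbitrary):

```python
# Check: for two groups, the one-way ANOVA F statistic equals the square
# of the independent t-test's t statistic, and the p-values coincide.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
g1 = rng.normal(0.0, 1.0, size=20)
g2 = rng.normal(0.5, 1.0, size=20)

t_stat, p_t = stats.ttest_ind(g1, g2)
f_stat, p_f = stats.f_oneway(g1, g2)

diff_stat = abs(f_stat - t_stat**2)  # F equals t squared
diff_p = abs(p_t - p_f)              # identical p-values
```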

• Signal enhancers

• Factorial designs

• Noise reducers

• Covariance designs

• Blocking designs

• Factor – major independent variable

• Level – subdivision of a factor

• Setting = in_class, pull-out

• Time_on_task = 1 hour, 4 hours

• Design notation as shown

• 2x2 factorial design (2 levels of one factor × 2 levels of a second factor)

• Null case

• Main effect

• Interaction Effect
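A numeric sketch of main effects and an interaction in a 2x2 design; the cell means below are hypothetical:

```python
# Main effects and interaction from a 2x2 table of (hypothetical) cell means.
import numpy as np

# rows = levels of factor A, columns = levels of factor B
cell_means = np.array([[10.0, 14.0],
                       [12.0, 20.0]])

main_a = cell_means.mean(axis=1)  # marginal means over B -> main effect of A
main_b = cell_means.mean(axis=0)  # marginal means over A -> main effect of B

# Interaction: the simple effect of B differs across the levels of A
interaction = (cell_means[1, 1] - cell_means[1, 0]) - \
              (cell_means[0, 1] - cell_means[0, 0])
```

In the null case both marginal differences and the interaction term would be near zero; here B's effect is 4 units larger at the second level of A, a non-zero interaction.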

• Regression Analysis

• ANOVA

• Analysis of variance – tests hypotheses about differences between two or more means

• Could do pairwise comparisons using t-tests, but running many pairwise tests gives a higher probability of rejecting a true null hypothesis (a Type I error) than a single ANOVA

• Example:

• Effect of intensity of background noise on reading comprehension

• Group 1: 30 minutes reading, no background noise

• Group 2: 30 minutes reading, moderate level of noise

• Group 3: 30 minutes reading, loud background noise

• One factor (noise), three levels (a = 3)

• Null hypothesis: 1 =2 =3

• If all sample sizes are the same, use n; total N = a × n

• Else N = n1 + n2 + n3

• Normal distributions

• Homogeneity of variance

• Variance is equal in each of the populations

• Random, independent sampling

• Still works well when the assumptions are not quite true (“robust” to violations)

• Compares two estimates of variance

• MSE – Mean Square Error, variances within samples

• MSB – Mean Square Between, variance of the sample means

• If the null hypothesis is true, then MSE ≈ MSB, since both are estimates of the same quantity

• If it is false, then MSB is sufficiently greater than MSE

• Use the sample means to estimate the variance of the sampling distribution of the mean; in the example this estimate is 1

• MSB = n × (variance of the sample means)

• In the example, MSB = (4)(1) = 4

• Depends on ratio of MSB to MSE

• F = MSB/MSE

• The probability value is computed from the F value, whose sampling distribution depends on the degrees of freedom of the numerator (a - 1) and of the denominator (N - a)

• Look up the F value in a table to find the p value

• For one numerator degree of freedom, F = t²
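The table lookup can also be done with the F distribution's survival function. As a sketch, the values below are taken from the TD row of the ANOVA table later in this transcript:

```python
# Sketch: converting an F value to a p-value using the F distribution,
# with numerator df = a - 1 and denominator df = N - a. The numbers match
# the TD (Task x Dosage) row of the ANOVA table in this transcript.
from scipy import stats

f_value, df_num, df_den = 5.783, 2, 42
p = stats.f.sf(f_value, df_num, df_den)  # survival function: P(F > f_value)
print(round(p, 3))  # ≈ 0.006, as in the table
```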

• Three significance tests

• Main factor 1

• Main factor 2

• interaction

• 3 levels of dosage (0, 100, 200 mg)

• 2 levels of task (simple, complex)

• 2x3 factorial design, 8 subjects/group

| SOURCE | df | Sum of Squares | Mean Square | F | p |
|--------|----|----------------|-------------|-------|-------|
| Task | 1 | 47125.3333 | 47125.3333 | 384.174 | 0.000 |
| Dosage | 2 | 42.6667 | 21.3333 | 0.174 | 0.841 |
| TD (Task × Dosage) | 2 | 1418.6667 | 709.3333 | 5.783 | 0.006 |
| ERROR | 42 | 5152.0000 | 122.6667 | | |
| TOTAL | 47 | 53738.6667 | | | |

• Sources of variation:

• Task

• Dosage

• Interaction

• Error

• Sum of squares (as before)

• Mean Squares = (sum of squares) / degrees of freedom

• F ratios = mean square effect / mean square error

• P value : Given F value and degrees of freedom, look up p value
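These two formulas can be checked against the ANOVA table's own numbers (Task × Dosage and ERROR rows):

```python
# Arithmetic check of the ANOVA table: Mean Square = Sum of Squares / df,
# F = MS_effect / MS_error. Values come from the TD and ERROR rows.
ss_td, df_td = 1418.6667, 2
ss_err, df_err = 5152.0000, 42

ms_td = ss_td / df_td     # 709.33335, matching the table's 709.3333
ms_err = ss_err / df_err  # about 122.6667
f_td = ms_td / ms_err     # about 5.783, matching the table's F
```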

• Mean time to complete the task was higher for the complex task than for the simple task

• Effect of dosage not significant

• An interaction exists between dosage and task: increasing the dosage decreases performance on the complex task while increasing performance on the simple task