
An Introduction to Things to Come!


- The details on the class can be found here: www.stanford.edu/class/hrp259
- Your final grade is determined by class participation, lots of little assignments, and 2 exams.
- If you send me a computer virus or other malicious code (even unintentionally) you will fail the course.
- Visit ess.stanford.edu to get a virus scanner and keep it up to date by clicking LiveUpdate….

- Dozens of new viruses are unleashed on the world every day. Please update your virus definitions and scan everything before you send it to me.

- Either the world is driven completely by random chance events (and your best bet for predicting the future is using Tarot cards or a Magic 8 Ball™) or there are detectable patterns in the world.
- If you talk to a preschool teacher or a PhD in math, they will tell you that math is all about pattern detection.

- To predict the future using a deterministic model, you would say that if event A happens then B will happen. For example, in a deterministic world, if you know that a baby gestates for exactly 280 days (40 weeks), its weight will be exactly 7 ½ lbs (3.2 kg). We know that in reality it is typically in the range of 5.5–10 pounds (2.7–4.6 kg).
- For better or for worse, we do not live in a deterministic (Orwellian 1984-ish) world, and we cannot usually make EXACT predictions in medicine.

- You want to be able to fill in a table like this:

Weeks of gestation | Weight at birth (in lbs)
------------------ | ------------------------
…                  |
29 weeks           |
38 weeks           |
39 weeks           |
40 weeks           |

…and express it with a simple formula like this:

lbs = weeks * something

- The process of going from a single predictor (or a set of predictors) to a predicted outcome is called statistical modeling.
- People get far too excited about figuring out which statistic (with accompanying p-value anxiety) to use for the factors that go into models.
- Today I want to talk about the process of building and using the most common models that you will see in medicine.

- The predictors and the outcomes can be on a continuous scale (time in days) or categorical factors (mom smoked, yes or no).
- Generally we try to use all the information available when we make a prediction about the future, so prefer the richer (continuous) version of a measure:
- The amount of blood ejected each time the heart beats (continuous scale) as opposed to whether or not the heart is beating
- The number of cancer cells seen on a slide (or the presence or absence of malignant cells)

- The models we build are remarkably similar regardless of whether we have categorical or continuous outcomes.

- All the models I learned in school were formulated at their core like this:
Outcome = baseline + predictor + predictor

- The math can get ugly very quickly depending on the properties of the outcome (continuous, count, categories) but the core idea is that these models are all using additive contributions from some predictors!

Baby’s weight = some number (baseline) + weeks * a number (impact of time) + a number (impact of being a smoker)

- You will be faced with the HARD question of how many predictors to include.
- Stop and think about the LEVELS of your predictors. If you have 100 births in a dataset and 10 variables, many of which are categorical, you can quickly find yourself making predictions based on one or two births.
- Male babies (50%) from smoking mothers (20%) where this is the mom’s 2nd or later birth (35%) = SMALL numbers.
- From 100 in the sample, you end up with about 3.5 births in this level.
- Do you want to make generalizations to the WORLD based on 3 or 4 children?
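Multiplying the quoted proportions gives the expected size of that cell directly. A quick arithmetic sketch (the percentages are the illustrative ones above, not real data):

```python
# Expected number of births in one cell of the table,
# using the illustrative proportions quoted above.
n_births = 100
p_male = 0.50          # male babies
p_smoker = 0.20        # smoking mothers
p_second_plus = 0.35   # mom's 2nd or later birth

cell_size = n_births * p_male * p_smoker * p_second_plus
print(cell_size)  # about 3.5 births -- far too few to generalize from
```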

- Are the factors correlated?
- Are some cheap to measure?
- Can you use some as proxies for others?

- One of my favorite statistics books is Michael J. Crawley’s Statistical Computing. He says:
- All models are wrong.
- Some models are better than others.
- The correct model can never be known with certainty.
- The simpler the model, the better it is.

- A poor model:
- Predicts some outcomes poorly
- Is strongly influenced by a small number of data points
- Shows systematic patterns in how it fails to predict

- Think about the extremes of what you could use when making a model:
- Null model – use the mean of everyone
- A theoretically minimally adequate model
- Current model you have specified
- A theoretical maximal model with every predictor
- Saturated model
- Every predictor and every interaction
- Have everyone predict themselves
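The two extremes bracket everything in between: the null model predicts everyone with the overall mean, while the saturated model lets everyone predict themselves. A minimal sketch with invented outcome values:

```python
# Toy outcome values (invented for illustration)
y = [6.0, 6.5, 7.0, 7.5, 8.0]
mean_y = sum(y) / len(y)

null_predictions = [mean_y] * len(y)   # null model: everyone gets the mean
saturated_predictions = list(y)        # saturated model: everyone predicts themselves

null_sse = sum((obs - pred) ** 2 for obs, pred in zip(y, null_predictions))
saturated_sse = sum((obs - pred) ** 2 for obs, pred in zip(y, saturated_predictions))
print(null_sse)       # the baseline error any real model should beat
print(saturated_sse)  # 0.0 -- a perfect fit, with zero ability to generalize
```

Any model you actually specify should land between these two error totals.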

- Sometimes factors in a model behave strangely together. Imagine a mad scientist who invented a love potion and looked at happiness by gender and by being on or off her drug. There was a huge increase in happiness if you were male and on the drug, but minimal impact of gender or drug by themselves.

- Imagine that there are two factors that impact a man’s risk of prostate cancer: being Japanese and living in Japan as a pre-teen. If you are Japanese, your risk of prostate cancer is lower than for men of any other race. If you were raised in Japan, your risk is lower than for men raised in the USA.
- The interaction term would measure if there is extra protection for being both Japanese and having been raised in Japan.

- IsJapanese = 0 or 1 (1 = yes and 0 = no)
- IsRaisedJapan = 0 or 1
- IsJapanese * IsRaisedJapan = 1 only when both are 1, and 0 otherwise
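The product of two 0/1 indicators is itself a 0/1 variable that switches on only when both indicators are 1, which is exactly what the interaction term codes. A tiny sketch:

```python
# The interaction term is just the product of the two 0/1 indicators.
for is_japanese in (0, 1):
    for is_raised_japan in (0, 1):
        interaction = is_japanese * is_raised_japan
        print(is_japanese, is_raised_japan, interaction)
# Only the (1, 1) row has interaction == 1, so its coefficient measures
# the EXTRA protection of being both Japanese and raised in Japan.
```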

- You can keep adding more and more predictors to a model, but the price is a loss of generalizability. Will you ever get another child who is exactly that weight, whose mother smoked that much, of that ethnicity, that gender, whose father weighed that much, whose mother weighed that much, etc.?
- Modeling methods compare the models and frequently use criteria that penalize extra factors (e.g., AIC).

- I see modeling as having two goals.
- Estimate parameters.
- How much weight gain occurs each week as a baby is developing?

- Estimate how well it describes your data.
- How far off will my guess be when I predict the next child?
- Are there regions where my guesses are far off, like premature or late deliveries?
- Is there a lot of variability at one point and not at others?
- Can I see any problems when I fit the model to THIS data?

- Statisticians use the word “error” differently than everyone else.
- You know that you will not have perfect prediction. Instead, you will be off. That is error. It does not mean somebody made a mistake! It just means you can’t make a perfect prediction.
- Specifying how far you will be off is the fun and interesting part of statistics. The rest is just math.

Outcome = baseline + predictor + predictor + error

Baby’s weight = some number (baseline) + weeks * a number (impact of time) + a number (impact of being a smoker) + a number drawn from a bell-shaped distribution (error)

- Hopefully you will see that, given any specific predictor value, your guessed values for the outcome will be close to the values you actually observe in the outcome. Also, any observed outcome values that stray too far from your guess are unlikely.
- That pattern of how far off your guesses are from your observed data can frequently be described by a bell-shaped (“normal”) histogram. So, if you measure errors between your prediction and the observed outcomes, the distribution should be “normal.”

[Figure: histogram of actual weights at 40-week births, spanning roughly 5.5 lbs to 9.5 lbs; my model guesses 7.5 lbs.]

[Figure: histogram of the errors at 40 weeks, centered on 0 (0 error if the child was 7.5 lbs); most errors are off by just a bit, and I guessed way too high or way too low only rarely.]

- There are some kinds of errors that you will be unwilling to accept.
- If I want to predict the number of times an evil lackey proposes marriage to a mad scientist, I will not accept a negative number!
- If I am predicting the chance of someone developing cancer, I will not accept a number less than 0% or greater than 100%.
- Specifying the type of errors is a critical part of building a model.

- In addition to specifying the range of legal values, another critical component is specifying the variability in the errors.
- If your data is constrained to lie between 0 and 1, what is the variability like?
- If the average is about .5, then you can have scores that are both above and below that and the variability drawn as a histogram may be well described by a bell shaped curve. What is the variability like if the average is about .95? Now the whole right side of your curve is hanging off the side of your page!

- If you have count data (e.g., number of cancer cells), your variability increases with the average count.

If you pretend your variability is normally distributed but your outcome has a limited range, you clearly have problems.

[Figure: three histograms of data bounded between 0 and 1, with means of about .5, .73, and .9; as the mean approaches 1, the bell shape is squeezed against the boundary and the variability must shrink.]

In theory, the variance of count data increases with the mean.
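A quick simulation of this fact, using a seeded stdlib Poisson sampler (Knuth's multiplication method); the λ values are arbitrary:

```python
import math
import random

random.seed(0)

def poisson_sample(lam):
    """Draw one Poisson(lam) count via Knuth's multiplication method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

for lam in (2, 5, 10):
    draws = [poisson_sample(lam) for _ in range(20000)]
    mean = sum(draws) / len(draws)
    var = sum((d - mean) ** 2 for d in draws) / len(draws)
    print(f"lam={lam}: mean={mean:.2f} variance={var:.2f}")
# For each lam the sample variance is close to the sample mean,
# so the spread of count data grows with the average count.
```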

- Perhaps the easiest models to draw and understand are ones where you have a continuous outcome like weight and a continuous predictor like time.
- The model is just a line….
- Y = mX + b
Weight = estimated weight gain each week after conception * number of weeks + weight at 0 weeks
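Fitting that line by ordinary least squares needs nothing more than means and sums of squares. A minimal sketch on made-up gestation/weight pairs (the data values are invented for illustration; the course examples use R's lm() instead):

```python
# Toy data (invented): weeks of gestation vs. birth weight in lbs
weeks = [36, 37, 38, 39, 40, 41]
lbs = [5.9, 6.4, 6.8, 7.1, 7.5, 7.9]

# Ordinary least squares for a line: slope = cov(x, y) / var(x)
n = len(weeks)
mean_x = sum(weeks) / n
mean_y = sum(lbs) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, lbs)) / \
        sum((x - mean_x) ** 2 for x in weeks)
intercept = mean_y - slope * mean_x

# Weight = weekly gain * weeks + weight at 0 weeks
print(f"estimated gain per week: {slope:.2f} lbs")
predicted_40 = slope * 40 + intercept
print(f"predicted weight at 40 weeks: {predicted_40:.1f} lbs")
```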

- All models are wrong.
- Your data is sacred (after you remove the pregnant men) and you fit models to the data. You do not fit data to a model. That difference is not a minor semantic detail.

- Sometimes you have data points that are not well fit by the model. Go to extreme measures to document those points. If the data is not a true error, then run the analysis with it and without it. Include the point(s) in all your plots with a special symbol and if one person changes your inferences, consider excluding them.
- You may have different subgroups that you have not identified yet.

[Figure: scatterplot of births with one outlying point labeled “Induced because of HUGE size”.]

- A critical step in examining the quality of a model is graphically looking at the residuals.
- Residuals are the differences between the estimated values and the observed values for each person/critter/observation.
- Look for curves, changing variability across the range of values or changes over time.
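Residuals are cheap to compute: observed minus predicted. A sketch with invented data and an assumed straight-line fit (the slope and intercept are made up for illustration):

```python
# Residuals = observed - predicted.
# Toy data and hand-picked line coefficients (assumed values for illustration).
weeks = [36, 37, 38, 39, 40, 41]
observed = [5.9, 6.4, 6.8, 7.1, 7.5, 7.9]
slope, intercept = 0.39, -8.0

residuals = [y - (slope * x + intercept) for x, y in zip(weeks, observed)]
for x, r in zip(weeks, residuals):
    print(f"{x} weeks: residual {r:+.2f} lbs")
# Healthy residuals hover around zero with no curve and no fanning-out;
# a visible pattern means the model is systematically wrong somewhere.
```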

From Crawley: Statistical Computing

- When you have multiple predictors in a model, you can ask for residuals where you have “controlled for” or “removed the effect of” the other factors.
- Evaluate software packages on their ability to produce these graphics. They are completely missing from Excel, they are not built into SAS (but can be produced with minor work), and they are trivially easy with R or S-plus.

- Linear models can model curves
- The math is not too bad….

- You can use explicit mathematical formulas. If you see curves in your residuals, you can use things like:
- Polynomials or inverse polynomials
- Exponentials
- Power functions

[Figure: an S-shaped curve, from Crawley, Statistical Computing.]

- Often the formulas to describe your data are extraordinarily complicated and you want to use non-linear or non-parametric modeling instead.
- Key words you will see include:
- Non-parametric smoothing
- Lowess regression
- Spline regression

- GAM (generalized additive models)
- Tree models


- What happens when you fit a straight linear model to curvilinear data?

[Figure: a straight line fit to curvilinear data, with one residual marked. Is this better than a flat line at the mean?]

- A tiny p-value does not mean a good model!
- Where on the output does it tell that this is a good or a poor model?

Flatten the line, then look up and down to see if you are systematically off.

- You can build a model that has a curve using a polynomial… the degree of the polynomial determines how many “bends” appear in the curve. So a 2nd-degree polynomial would use x and x^2, while a 3rd-degree polynomial would use x, x^2, and x^3. These squared or cubed values don’t do anything especially complicated. They are just like adding new variables.

size = intercept + X * something + X^2 * something else

size = intercept + X * something + X^2 * something else + X^3 * another thing

poly2 = lm(y~poly(x,2))

poly3 = lm(y~poly(x,3))

- Choosing where to stop adding terms to a model is as much an art as a science. You can compare the models and ask whether the difference between them is statistically significant.

- There are criteria, such as AIC, that penalize your model as you add more and more factors.
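For least-squares fits, AIC can be computed (up to a constant) as n·ln(RSS/n) + 2k, where k counts the fitted parameters; the 2-per-parameter penalty is what makes a slightly better fit lose if it needed many extra terms. A sketch with made-up residual sums of squares:

```python
import math

def aic_least_squares(n, rss, k):
    """AIC for a least-squares fit: n*ln(RSS/n) + 2k (up to a constant)."""
    return n * math.log(rss / n) + 2 * k

n = 100
# Hypothetical residual sums of squares: the bigger model fits slightly better...
aic_small = aic_least_squares(n, rss=52.0, k=3)
aic_big = aic_least_squares(n, rss=51.5, k=6)
print(f"AIC, small model: {aic_small:.1f}")
print(f"AIC, big model:   {aic_big:.1f}")
# ...but not enough to pay the penalty for three extra parameters,
# so the smaller model wins (lower AIC is better).
```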

- You will eventually move out of the realm of predicting continuous outcomes with normal error. When you do, you will move into the realm of Generalized Linear Models (GLM).
- You want to have a linear model predicting an outcome where you restrict the possible outcome values (e.g., only allow values between 0 and 1) and deal with errors not being consistently normal across the entire range.
- You can change (transform) your outcome and model this with just another linear model similar to what I have shown.

- If you are predicting the number of bacteria you see in a Petri dish, you cannot possibly see a negative number of bacteria. A GLM can be written so that your predicted values cannot be negative.
- Contrast this with the baby weight example where with a bit of bad data for your predictor value, you could have the formula spit out a negative weight or a baby weighing a ton.

- Instead of modeling like this:

Outcome = baseline + predictor + predictor + error (normal/bell-shaped)

- You can model with a GLM like this:

Tweaked outcome = baseline + predictor + predictor + error (not necessarily normal)

- So, the ordinary least squares regression models that I have shown are really just a special case of GLM. In these cases I specify that the tweak to the outcome is to leave it identical to what it was originally, and the error is normal.
- The tweak to the outcome is called the link, and in this case the link is called the identity.

- The tweaks to the outcome are called links:
- Identity link = predicting a continuous outcome (baby weight)
- Log link = if you can’t have negative values
- Logit link = if you have to restrict the range to between 0 and 1
- There are other links.
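Each link's inverse maps the unbounded linear predictor back onto legal outcome values. A sketch of the three inverse links (the function names are mine for illustration, not a standard API):

```python
import math

def inverse_identity(eta):
    return eta                       # any real number: continuous outcomes

def inverse_log(eta):
    return math.exp(eta)             # always positive: counts

def inverse_logit(eta):
    return 1 / (1 + math.exp(-eta))  # always between 0 and 1: probabilities

# Even a wildly negative linear predictor gives legal predictions:
eta = -3.0
print(inverse_identity(eta))  # -3.0
print(inverse_log(eta))       # ~0.0498, still positive
print(inverse_logit(eta))     # ~0.0474, still between 0 and 1
```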

- Why bother to specify an error structure other than normal?
- Strong skew, heavy-tailed (kurtotic) errors, bounded outcomes, counts that cannot be negative

- The shape of the error distribution is not always a bell-shaped curve. Rather than worrying about the math to describe those curves, you simply need to know that different types of data have different error structures.
- Normal errors – continuous outcomes
- Poisson errors - counts
- Binomial errors - proportions
- Gamma errors - variation

- If you are not dealing with a continuous outcome, or count data, you will likely have a binary (yes/no scored as 1 or 0) outcome.
- Clearly you need to do some major tweaking to the outcome because linear models, as we have seen, can predict very large and small numbers.
- Also, the variability of a binary outcome is very different from a continuous variable.

- The solution is to specify a link that limits values to be between 0 and 1 (think of the changed outcome as being the probability of being scored 1) and use an error term that behaves well with binary outcomes.
- This is a GLM with a logit link and binomial errors.
- This kind of analysis is so popular most people don’t know it is a GLM. Rather, they know it only as logistic regression.

- The first assignment is on the class website and is due before the start of class Wednesday.
- Print the slides as close as possible to class time.