Overview
• Demonstrate how to use different Dashboard Reports for Evaluation
• Demonstrate new Custom Reports

Agenda
• Introductions
• Reports
• Questions
What are the New Reports?
• Performance Management Dashboard Interface (Dashboards)
• Custom Dashboard Reports (Custom Reports)
• SAS/SPSS
Click on any graph to see a report with additional information for that measure.
[Your Grant Number and Name will be here] Blue line is your grant. Gold line is comparison (GFA)
Table will show follow-ups due, follow-ups received, and follow-up rate by month and year
For each outcome, the data table shows the number of clients, the percent at intake, the percent at the 6-month follow-up, and the rate of change.
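The follow-up-rate table described above can also be reproduced outside the dashboard. A minimal Python sketch, using hypothetical records (the dashboard's actual fields and export format are not shown in these slides):

```python
from collections import defaultdict

# Hypothetical records: (month, follow_up_due, follow_up_received)
records = [
    ("2023-01", True, True),
    ("2023-01", True, False),
    ("2023-02", True, True),
    ("2023-02", True, True),
    ("2023-02", True, False),
]

def follow_up_rate_by_month(rows):
    """Count follow-ups due and received per month; rate = received / due."""
    due = defaultdict(int)
    received = defaultdict(int)
    for month, is_due, is_received in rows:
        if is_due:
            due[month] += 1
            if is_received:
                received[month] += 1
    return {m: (due[m], received[m], received[m] / due[m]) for m in due}

print(follow_up_rate_by_month(records))
```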
Dashboard and Custom Reports Navigation Tips (cont.) • View reports in HTML, PDF, Excel, or XML by clicking the Earth icon in the upper right corner of the report screen. After clicking a view, choose HTML to return to the Dashboard display • Dashboards only • Change the comparisons in each display by clicking the radio buttons in the list to the left of the graphic display • Dashboards include only intakes matched with 6-month follow-up interviews (except the Intake Target Report)
For a specific date range, click the “Specific Date Range” button and then click the “Apply” button
To run a custom age range, choose “Specific Age Range” and click the “Apply” button. Use the Ctrl key to select multiple options within a demographic group
The breadcrumb heading lets you see which selections you have made on previous pages. Example: run the report for clients who were homeless at intake
To save report, click “Keep this version”, then click “Save as Report View”
To save report, choose “Select My Folders” and type the name of your report. Then click “Ok”.
External Data Sources • Help contextualize performance • Understand trends in factors that influence access/capacity/treatment (e.g., area unemployment, drug epidemics) • National Survey on Drug Use and Health, Treatment Episode Data Set, others • Benchmarking facilitates comparisons with similar organizations based on population, resources, and community
Evaluation Analyses: Bringing It All Together • Preliminary Analyses • Missing Data • Trends • Process and Outcome Analyses • Process: Identify areas of success/improvement • Outcome: Client outcomes • Special Analytic Topics • Analysis of Change over time
Preliminary Analyses • Missing Data • Attrition/Drop-Out • Item Missingness • Problem: Those who are retained may differ from those who drop out or fail to answer questions
Alphabet Soup • MCAR • MAR • NMAR • Ignorable/non-ignorable • GEE • RC/RE • HLM/MLM • PMM • DID
Missing Completely At Random (MCAR) • The likelihood of missing data (item or assessment) is due to “chance” factors. Missingness is unrelated to any specific survey item (observable) • Sick on day of test administration (missed assessment) • Missed an item on the survey because not “paying attention” • No differences on any factor that is not observed (unobservable) in the study but is related to the study outcome (e.g., no differences in the likelihood of being sick between smokers and non-smokers) • Ignorable in that results will not be biased
Missing At Random (MAR) • The likelihood of missingness (item or assessment) is due to respondent characteristics collected in the study. • Smokers are more likely to be absent on the day of data collection, and information on smoking status is collected. • People who smoke are less likely to report income, and smokers have lower incomes in general. Smoking status is collected in the study. • Analyses will be biased unless appropriate procedures are used, perhaps including information on smoking. • Non-ignorable unless appropriate procedures are in place.
Not Missing at Random (NMAR) • The likelihood of missingness (item or assessment) is due to factors not observed in the study. • Smokers are more likely to be missing on the income variable, and smokers have lower incomes. But we do not have information on smoking. • Unverifiable assumption. Can distinguish MCAR from MAR by whether missingness is related to an observed covariate. Cannot distinguish MAR from NMAR because it is purely a hypothesis as to why data are missing. • Two procedures: Selection Models and Pattern-Mixture Models • Selection: Model the joint response of outcome and missingness • Pattern Mixture: Average effect based on unique patterns of missingness
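The three mechanisms can be made concrete with a small simulation built around the slides' smoking/income example. Everything below (the distributions, the missingness probabilities, the sample size) is invented for illustration; the point is that complete-case means stay unbiased only under MCAR:

```python
import random

random.seed(0)

# Hypothetical cohort: smokers earn less on average, and income is the
# variable that goes missing under each mechanism.
clients = [{"smoker": random.random() < 0.3,
            "income": random.gauss(40_000, 10_000)} for _ in range(10_000)]
for c in clients:
    if c["smoker"]:
        c["income"] -= 8_000

def mask_income(client, mechanism):
    """Return income, or None if it is 'missing' under the given mechanism."""
    if mechanism == "MCAR":   # pure chance, unrelated to anything
        p = 0.2
    elif mechanism == "MAR":  # depends on OBSERVED smoking status
        p = 0.4 if client["smoker"] else 0.1
    else:                     # NMAR: depends on the missing value itself
        p = 0.4 if client["income"] < 35_000 else 0.1
    return None if random.random() < p else client["income"]

full_mean = sum(c["income"] for c in clients) / len(clients)
observed_means = {}
for mech in ("MCAR", "MAR", "NMAR"):
    kept = [v for v in (mask_income(c, mech) for c in clients) if v is not None]
    observed_means[mech] = sum(kept) / len(kept)
    print(f"{mech}: complete-case mean off by {observed_means[mech] - full_mean:+.0f}")
```

Under MCAR the complete-case mean sits near the true mean; under MAR and NMAR it is biased upward, because low-income respondents are the ones disproportionately dropped.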
Item-Level Missing Data • Data exist for respondents, but items are missing within the survey • Typically ignored in practice • Excluding incomplete cases can: • Reduce the likelihood of detecting significant effects • Bias results, depending upon the nature of the missing data
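The "exclude data" practice above is listwise (complete-case) deletion. A tiny sketch of how quickly it shrinks the analyzable sample; the survey fields are hypothetical:

```python
# Hypothetical survey rows; None marks an item skipped within the survey.
rows = [
    {"age": 34, "income": 52_000},
    {"age": 29, "income": None},
    {"age": 41, "income": 61_000},
    {"age": None, "income": 38_000},
]

# Listwise deletion: drop any respondent with at least one missing item.
complete_cases = [r for r in rows if None not in r.values()]
print(f"{len(rows)} respondents, {len(complete_cases)} complete cases")
```

Half the respondents are lost here even though each one answered most items, which is exactly how deletion erodes power and, if the missingness is not MCAR, biases results.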
Missing Data in Longitudinal Studies • Missing: Drop-out at any given wave • Pattern: Structure of drop-out • Monotone missing data: A data row of variables Y1, Y2, ..., Yp (in that order) has a monotone missing pattern when a variable Yj being missing for a particular individual implies that all subsequent variables Yk, k > j, are missing for that individual. Equivalently, when a variable Yj is observed for a particular individual, all previous variables Yk, k < j, are also observed for that individual (this defines an attriter)
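The monotone-pattern definition above reduces to a single pass over a client's waves. A Python sketch, with None standing in for a missing wave:

```python
def is_monotone(row):
    """True when, once a wave is missing (None), every later wave is too.

    [y1, y2, None] is monotone (an 'attriter' who dropped out and stayed out);
    [y1, None, y3] is not (intermittent missingness)."""
    seen_missing = False
    for value in row:
        if value is None:
            seen_missing = True
        elif seen_missing:
            return False
    return True

print(is_monotone([1.2, 3.4, None]))   # drop-out after the second wave
print(is_monotone([1.2, None, 3.4]))   # came back after a missed wave
```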
Understanding Drop-Out • Drop-out: Only presenting data on clients who were successfully followed up may cloud interpretation of data • Hard-to-Treat cases often drop-out • If near 100% follow-up rate, then more confidence in follow-up data • Identifying cases lost-to follow-up can help targeting/recruitment efforts • Need to understanding “coding” of drop-out (administrative discharge)
Missing Data Patterns for 3 Observations (SAS formats)

proc format;
  value pattern
    1 = '0 0 0'  2 = 'M 0 0'  3 = '0 M 0'  4 = 'M M 0'
    5 = '0 0 M'  6 = 'M 0 M'  7 = '0 M M'  8 = 'M M M';
  value mono
    0 = 'complete data'
    1 = 'drop after baseline'
    2 = 'drop after eot'
    3 = 'drop after 2m'
    4 = 'missing data at any other';
run;
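The same pattern codes can be assigned outside SAS. A hypothetical Python mirror of the PATTERN format, with waves ordered (baseline, eot, 2m) and None marking a missing wave:

```python
# Ordering mirrors the SAS PATTERN format above: '0' = observed, 'M' = missing.
PATTERNS = ['0 0 0', 'M 0 0', '0 M 0', 'M M 0',
            '0 0 M', 'M 0 M', '0 M M', 'M M M']

def pattern_code(row):
    """Map a 3-wave row to its 1-8 pattern code."""
    key = ' '.join('M' if v is None else '0' for v in row)
    return PATTERNS.index(key) + 1

print(pattern_code([5.1, 4.2, None]))  # '0 0 M'
```

Tabulating these codes over all clients shows at a glance how much of the missingness is monotone drop-out versus intermittent.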
Types of Analyses for Drop-Out • Identify patterns of drop-out • Understand the assumptions of the methods used to analyze longitudinal data • GEE: MCAR • RE: MAR • Regression/ANOVA: MCAR • Determine which variables may be related to missingness and attrition (monotone) • Conduct sensitivity (stability) analyses under different assumptions
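A basic first step for the "which variables relate to missingness" bullet is comparing completers and drop-outs on baseline measures. A simulated sketch, in which the severity variable and the drop-out process are invented for illustration:

```python
import random

random.seed(1)

# Hypothetical clients: higher baseline severity -> more likely to drop out,
# echoing the slides' point that hard-to-treat cases are often the ones lost.
clients = [{"severity": random.random()} for _ in range(5_000)]
for c in clients:
    c["followed_up"] = random.random() > 0.2 + 0.4 * c["severity"]

def mean_severity(rows, followed_up):
    vals = [r["severity"] for r in rows if r["followed_up"] == followed_up]
    return sum(vals) / len(vals)

completer_mean = mean_severity(clients, True)
dropout_mean = mean_severity(clients, False)
print(f"completers: {completer_mean:.2f}, drop-outs: {dropout_mean:.2f}")
# A gap like this makes MCAR implausible and flags follow-up results as
# potentially biased toward the easier-to-treat clients.
```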