
Using a Statewide Evaluation Tool for Child Outcomes & Program Improvement


Presentation Transcript


  1. Using a Statewide Evaluation Tool for Child Outcomes & Program Improvement Terry Harrison, Part C Coordinator Susan Evans, Autism Project Specialist New Jersey Early Intervention System NJ Department of Health and Senior Services

  2. A look at New Jersey Part C • NJ has 21 counties • Each county has at least one dedicated Targeted Evaluation Team (TET). All eligibility evaluations are done by the TETs. • Evaluators administer a standardized tool for all children at entry and a percentage of children at exit to answer OSEP Outcome questions 3.A, 3.B, and 3.C

  3. Battelle Developmental Inventory, 2nd Edition • Chosen based on the following criteria: • Commercially available • Domains answer the Child Outcome questions • Reliable and valid • Can be administered by NJEIS evaluators • Norm referenced • Can be used to help determine eligibility • Can be used for Part C and 619

  4. Exit Plan • 5-6 counties each year, over 4 years, conduct exit evaluations when children leave the system. • To be assessed at exit, a child must: • Have an intake BDI-2 • Be in the system for at least 6 months • Reside in a county doing exit evaluations • NJ reported exit data for 63 children in the 2008 APR

  5. OSEP APR Reporting

  6. Reporting Decisions • For APR indicators 3.B and 3.C, NJEIS makes decisions based on two BDI-2 domains
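The deck never states the domain-to-indicator mapping outright, but it can be inferred from the worked examples on later slides. A hypothetical lookup table, offered as an assumption rather than an official NJEIS specification:

```python
# Domain pairs inferred from the examples later in this deck
# (slides 14, 17, and 20); treat the exact mapping as an assumption.
INDICATOR_DOMAINS = {
    "3.A": ["Personal-Social"],             # single-domain indicator (slide 11)
    "3.B": ["Cognitive", "Communication"],  # slide 14 example
    "3.C": ["Adaptive", "Motor"],           # slide 17 example
}
```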

  7. Standard Score • NJEIS uses BDI-2 derived Standard Scores by domain as the basis for reporting • The Standard Score represents the child’s development in relation to children in the same age group • Mean = 100, SD = 15

  8. Standard Score • Scores of 90 to 100 are considered “average” • Scores between 80 and 89 are considered “low average” • Scores below 80 indicate “mild to more severe developmental delay”

  9. Same age peers • NJEIS considers children as functioning with same-age peers when their standard score in each domain is 80 or greater. • Children must be in the “low average” group or higher.
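The banding and peer-comparability rules on slides 7-9 reduce to a few threshold checks. A minimal Python sketch (the function names are ours, not NJEIS's):

```python
def score_band(standard_score: float) -> str:
    """Band a BDI-2 standard score (mean 100, SD 15) per slides 7-9."""
    if standard_score >= 90:
        return "average"      # slide 8 says 90-100; we assume scores above 100 band the same
    if standard_score >= 80:
        return "low average"
    return "mild to more severe developmental delay"


def comparable_to_same_age_peers(domain_standard_scores: list) -> bool:
    """Slide 9: functioning with same-age peers means every domain score is >= 80."""
    return all(score >= 80 for score in domain_standard_scores)
```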

  10. Initial and Exit Scores • NJEIS uses four BDI-2 data elements from each domain to “calculate” a crosswalk to OSEP categories a, b, c, d, and e • Initial Raw – the raw score at entry • Initial Standard – the standard score at entry • Exit Raw – the raw score at exit • Exit Standard – the standard score at exit
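These four elements map naturally onto a small record type. A sketch of how one might carry them around in code (the class name is hypothetical):

```python
from dataclasses import dataclass

@dataclass
class DomainScores:
    """The four BDI-2 data elements NJEIS tracks per domain (slide 10)."""
    initial_raw: int       # raw score at entry
    initial_standard: int  # standard score at entry
    exit_raw: int          # raw score at exit
    exit_standard: int     # standard score at exit
```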

  11. Reporting Categories • Assignment to a, b, or c is evaluated independently from d and e • For 3.B & 3.C, the assignment to a, b, or c is based on the maximum “little” score assigned to a domain in each indicator (i.e., a is less than b) • For 3.A, the score for the single domain is reported

  12. Business Rules a, b, c • Report in “c”: Percentage of children who improved functioning to a level nearer to same-aged peers but did not reach it. Exiting Raw > Initial Raw AND Exiting Standard > Initial Standard AND Exiting Standard < 80

  13. Business Rules a, b, c • Report in “a”: Percentage of children who did not improve functioning. Exiting Raw <= Initial Raw AND Exiting Standard < 80 • Report in “b”: Percentage of children who improved functioning, but not sufficiently to move nearer to functioning comparable to same-aged peers. Exiting Raw > Initial Raw AND Exiting Standard <= Initial Standard AND Exiting Standard < 80
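Taken together, the a/b/c rules on slides 12-13 are three guarded comparisons per domain. A minimal sketch, with names of our own choosing; it returns None where the d/e rules on slide 15 take over:

```python
def abc_category(initial_raw: int, initial_std: int,
                 exit_raw: int, exit_std: int) -> "str | None":
    """Assign OSEP category a, b, or c for one BDI-2 domain (slides 12-13)."""
    if exit_raw <= initial_raw and exit_std < 80:
        return "a"  # did not improve functioning
    if exit_raw > initial_raw and exit_std <= initial_std and exit_std < 80:
        return "b"  # improved, but not nearer to same-aged peers
    if exit_raw > initial_raw and exit_std > initial_std and exit_std < 80:
        return "c"  # improved nearer to same-aged peers, but did not reach them
    return None     # exit standard >= 80: handled by the d/e rules (slide 15)
```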

  14. Example: Outcome 3.B, category c • Percentage of children who improved functioning to a level nearer to same-aged peers but did not reach it. • Cognitive Domain: Entry Raw = 25, Standard = 55 → Exit Raw = 49, Standard = 61 • Communication Domain: Entry Raw = 33, Standard = 64 → Exit Raw = 57, Standard = 71 • Raw and Standard scores increase in both domains; however, the exiting standard scores are below 80. Therefore, little c.

  15. Business Rules d, e • Report in “d”: Percentage of children who improved functioning to reach a level comparable to same-aged peers. Initial Standard < 80 AND Exiting Standard >= 80 • Report in “e”: Percentage of children who maintained functioning at a level comparable to same-aged peers. Initial Standard >= 80 AND Exiting Standard >= 80

  16. Business Rules d, e • A child is assigned to d or e only if both domains indicate that the child is comparable to same-aged peers • If only one of the two domains is comparable to same-aged peers, report in c • If one domain falls in d and the other in e, the child is assigned to d
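The d/e rules (slide 15) and the domain-combination rules (slides 11 and 16) sketch out in the same style. The combination logic exploits the fact that the category letters sort alphabetically:

```python
def de_category(initial_std: int, exit_std: int) -> "str | None":
    """Assign OSEP category d or e for one BDI-2 domain (slide 15)."""
    if initial_std < 80 and exit_std >= 80:
        return "d"  # improved to reach a level comparable to same-aged peers
    if initial_std >= 80 and exit_std >= 80:
        return "e"  # maintained functioning comparable to same-aged peers
    return None     # exit standard below 80: use the a/b/c rules instead


def combine_domains(cat1: str, cat2: str) -> str:
    """Combine two per-domain categories into one indicator category."""
    both = {cat1, cat2}
    if both <= {"d", "e"}:
        return min(cat1, cat2)  # slide 16: a d/e split is reported as d
    if both & {"d", "e"}:
        return "c"              # slide 16: only one comparable domain -> c
    return max(cat1, cat2)      # slide 11: the higher letter among a, b, c
```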

  17. Example: Outcome 3.C, category d • Percentage of children who improved functioning to reach a level comparable to same-aged peers. • Adaptive Domain: Entry Raw = 33, Standard = 76 → Exit Raw = 44, Standard = 87. Initial standard score below 80, exiting score at or above 80. Therefore, little d. • Motor Domain: Entry Raw = 96, Standard = 86 → Exit Raw = 118, Standard = 102. Initial and exiting standard scores at or above 80. Therefore, little e. • The child is reported in little d because the lower little score is used.
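Running the sketches above on the deck's own examples reproduces the reported categories:

```python
# Slide 17 (Outcome 3.C): Adaptive 76 -> 87, Motor 86 -> 102
print(combine_domains(de_category(76, 87), de_category(86, 102)))  # -> d

# Slide 14 (Outcome 3.B): both domains improve but stay below 80
print(combine_domains(abc_category(25, 55, 49, 61),
                      abc_category(33, 64, 57, 71)))               # -> c
```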

  18. Exit 2008 – Outcome #3.B Knowledge and skills

  19. Exit 2008 – Outcome #3.C Behaviors to meet needs

  20. Exit 2008 – Outcome #3.A Social Skills

  21. Applying Technology

  22. Part C & BDI-2 • Each evaluator uses a Palm Pilot that contains the full BDI-2 • Results: • Scoring errors are minimized • Evaluators sync the Palm to the web • Agencies have access to reports at the local level

  23. Web-based Data System • The lead agency has access to individual and agency data via the web-based data system • The lead agency uses the web-based data system to export data for federal reporting • Data are also used by the lead agency for: • Procedural Safeguards Contacts • Program compliance with the child outcomes project • Quality control of evaluators via desk audits

  24. General Supervision

  25. Data • NJEIS has started to use BDI-2 data as part of its general supervision and monitoring system • Monitoring includes: • Appropriateness of IFSP services based on the initial evaluation • Eligibility decisions • Evaluator qualifications and quality assurance

  26. General Supervision: Appropriate Services • NJEIS charted children whose eligibility evaluation showed scores more than 1.5 SD below the mean. • Compared this data to authorized service hours based on IFSPs. • These data raise questions about the appropriateness of the type and intensity of service decisions made by IFSP teams.

  27. Next Steps: Appropriate Services • Compare the areas of need (domains & sub-domains identified by the BDI-2 as more than 1.5 SD below the mean) with the type, frequency, and intensity of services identified on the IFSP • Monitor appropriate justification of IFSP Team service decisions • Provide Training & Technical Assistance

  28. General Supervision: Eligibility Decisions • NJEIS teams use the BDI-2 as part of the eligibility decision process • First statewide use of the same instrument as part of the eligibility process • Other tools are completed as needed

  29. Next Steps: Eligibility • Pending Part C final regulations, NJ is considering implementing the screener portion of the BDI-2

  30. General Supervision: Evaluators • Use of the statewide tool & subsequent training activities identified the need to establish minimum standards for qualified NJEIS evaluators. • The lead agency surveyed TET agencies regarding personnel criteria for their evaluators.

  31. Survey Results • 16 TET agencies responded • 6 agencies had specific “evaluator” job descriptions • The remaining agencies reported having job descriptions related to each discipline that also included evaluation as a job duty

  32. Survey Results • Agency requirements for EI experience: • 6 require 2+ years • 4 require 1 year • 1 requires 400+ hours in EI • 1 requires 1 year for a licensed professional and 2+ years for other disciplines • 4 have no requirements

  33. Survey Results • Most TET agencies do not require coursework or training in evaluation. • Mentoring plans: • 4 have no mentoring plan • 7 have procedures for mentoring or pairing with experienced evaluators • 6 have no plans specific to being an evaluator

  34. Next Steps:Evaluators • Review standard personnel criteria for evaluators established in other states • Develop NJ standards • Challenges: • Quantifying competencies for hiring and monitoring • Recruitment • Should the state consider “grandfathering” of current evaluators?

  35. Child Outcome Costs

  36. Implementation Costs • DHSS supplied all training and materials to agencies, including the technology component. Cost over three years: • First year: $107,165 • Second year: $151,975 • Third year: $48,210 • Total: $307,350

  37. Training/Evaluations • To date, 236 evaluators & program staff have been trained. • The average time of an eligibility evaluation has increased by 15 minutes. • Factors in the increase include: • Learning curve for new evaluators • Use of technology • Use of additional tools in areas where more information is needed

  38. Weighing the costs • Each evaluator’s one-time start-up cost has been approximately $1,300 (materials & training) • The additional evaluation time (15 min × 2 evaluators) increased costs by an average of $50.00 per evaluation • To implement the COSF or a similar procedure, the projected cost is $100 per staff member, per hour, to review & note progress on each form for each child included in Child Outcome Reporting
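The $50-per-evaluation figure is consistent with a $100/hour staff rate, the same rate quoted for the COSF projection on this slide. A quick arithmetic check (the rate's application to evaluators is our assumption):

```python
# Sanity check of the slide's per-evaluation cost arithmetic, assuming
# the $100/hour COSF staff rate also applies to evaluators.
staff_rate_per_hour = 100.0   # dollars per staff member per hour (assumed)
extra_minutes = 15            # added evaluation time per evaluator
evaluators_per_eval = 2       # two evaluators per eligibility evaluation

extra_cost = staff_rate_per_hour * (extra_minutes / 60) * evaluators_per_eval
print(f"${extra_cost:.2f} per evaluation")  # $50.00, matching the slide
```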

  39. Thank you
