
Presentation Transcript


  1. June 2012 Update, June 14, 2012. Andrew J. Buckler, MS, Principal Investigator, QI-Bench. With funding support provided by the National Institute of Standards and Technology.

  2. Agenda for Today • Progress on folder organization, naming conventions, and ISA roll-ups (Gary) • Demonstration of multiple time point batch analysis capability (Mike) • Progress on automated statistical analysis (Kjell) 2

  3. First Generation: 3A Challenge • Created a newly named data set upon request so that all DICOM files have a .dcm extension. • These files follow our 1st-generation standard for naming, file layout, and results. • This meets program needs, but from a system point of view there is essentially no automation. 3

  4. Infrastructure progress for the next generation • Created naming conventions and file formats for roll-up files. • Created scripts for generating roll-up files for location and change data (see the sketch below). 4
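
As a rough illustration only, a roll-up script along these lines might concatenate per-case result files into a single file; the directory layout, file-name pattern ("_locations.csv"), and case_id tagging below are assumptions, not the project's actual conventions.

    # Illustrative roll-up sketch (assumed layout): walk a results directory,
    # read one CSV of location data per case, and concatenate into one file.
    results_dir <- "results"                                  # hypothetical folder
    case_files  <- list.files(results_dir, pattern = "_locations\\.csv$",
                              full.names = TRUE)

    rollup <- do.call(rbind, lapply(case_files, function(f) {
      d <- read.csv(f, stringsAsFactors = FALSE)
      d$case_id <- sub("_locations\\.csv$", "", basename(f))  # tag rows by case
      d
    }))

    write.csv(rollup, file.path(results_dir, "rollup_locations.csv"),
              row.names = FALSE)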

  5. Example: Reference Data Sets for QIBA Compliance • Our most fully 2nd-generation-compliant reference data sets. • The totality of data, from DICOM to analysis, is included. • Study data, in an ISA-TAB-like structure, are available for analysis (see the sketch below). 5
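
For orientation, ISA-TAB organizes a study as tab-delimited investigation (i_*), study (s_*), and assay (a_*) files in a single folder. The minimal R sketch below shows how such a structure might be loaded; the path, file names, and the shared sample column are illustrative assumptions.

    # Sketch of loading an ISA-TAB-like study folder (names are assumed).
    study_dir <- "reference_data/qiba_ct_volumetry"            # hypothetical path

    study <- read.delim(file.path(study_dir, "s_study.txt"),
                        stringsAsFactors = FALSE)              # subject/sample metadata
    assay <- read.delim(file.path(study_dir, "a_assay.txt"),
                        stringsAsFactors = FALSE)              # links samples to image/analysis files

    # Join assay records back to their subjects via the shared sample column
    # (assumed here to be "Sample Name") so each result can be traced to its source.
    merged <- merge(study, assay, by = "Sample.Name")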

  6. Multi-timepoint batch analysis demo

  7. Demonstration of automated analysis using R controlled by Iterate: an Iterate workflow for running an R script (sketched below). 7
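
The slides do not show the script itself. As a hedged sketch, a workflow engine such as Iterate would typically call an R script non-interactively, passing input and output paths as arguments, roughly as follows; the script name, arguments, and expected columns are assumptions.

    #!/usr/bin/env Rscript
    # Hypothetical batch-callable analysis script, e.g.:
    #   Rscript analyze_volumes.R <input_csv> <output_csv>
    args <- commandArgs(trailingOnly = TRUE)
    if (length(args) != 2)
      stop("usage: Rscript analyze_volumes.R <input_csv> <output_csv>")

    volumes <- read.csv(args[1], stringsAsFactors = FALSE)  # expects value1, value2 columns

    out <- data.frame(
      n         = nrow(volumes),
      mean_diff = mean(volumes$value2 - volumes$value1),
      sd_diff   = sd(volumes$value2 - volumes$value1)
    )
    write.csv(out, args[2], row.names = FALSE)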

  8. Test/Re-test Study • Test/re-test is one of the building-block study designs for understanding a measurement system • Helps us to understand the variation we see in the system under the “no change” condition • Fundamental components of the analysis (sketched in code below): • Scatterplot of test values versus re-test values • Mean versus difference of test/re-test values (i.e., Bland-Altman) • Intraclass correlation coefficient (ICC): variation due to the subjects (class) relative to the unexplainable variation • Minimum detectable change (MDC): the minimum change considered to be statistically significant 8
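
A minimal base-R sketch of these four components follows, assuming a data frame tr with columns value1 (test) and value2 (re-test); the one-way ICC and the SEM-based MDC used here are common formulations and may differ in detail from the project's actual scripts.

    # Scatterplot of test vs. re-test values with the identity line.
    plot(tr$value1, tr$value2, xlab = "Test volume", ylab = "Re-test volume")
    abline(0, 1, lty = 2)

    # Bland-Altman: mean of each pair vs. their difference, with 95% limits.
    avg <- (tr$value1 + tr$value2) / 2
    dif <- tr$value2 - tr$value1
    plot(avg, dif, xlab = "Mean of test/re-test", ylab = "Difference")
    abline(h = mean(dif), lty = 1)
    abline(h = mean(dif) + c(-1.96, 1.96) * sd(dif), lty = 2)

    # One-way ICC from between-subject vs. within-subject mean squares.
    k    <- 2
    long <- data.frame(subject = factor(rep(seq_len(nrow(tr)), k)),
                       value   = c(tr$value1, tr$value2))
    ms   <- anova(aov(value ~ subject, data = long))[["Mean Sq"]]
    icc  <- (ms[1] - ms[2]) / (ms[1] + (k - 1) * ms[2])

    # MDC at 95% confidence from the standard error of measurement (SEM).
    sem <- sd(long$value) * sqrt(1 - icc)
    mdc <- 1.96 * sqrt(2) * sem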

  9. Test/Re-test Example • CT volumetry data • 32 subjects in a “coffee break” design • 2 subjects excluded as uncomputable • Analysis of volume calculations • Results can be generated for any study that follows the test/re-test format 9

  10. Volume Difference
            value1          value2
       2    7.554111        8.097028
       4    771.499954      654.255796
       6    22099.090961    22461.168856
       8    45558.404866    45411.064843
      10    8.713604        7.796490
      12    7824.517733     8166.324725
      14    31671.844931    29196.968703
      16    12.363709       13.833785
      18    19.949875       19.919854
      20    11.560880       6.167714
      22    37485.766057    32506.892712
      24    2.451236        2.451239
      26    33261.732159    12771.468560
      28    7720.802020     11.456613
      30    18012.668569    18377.257512
      32    8725.320453     8880.539000
      34    7.780063        8.649389
      36    16.691373       56622.555767
      38    8.241816        6070.071272
      40    16.342152       10.602244
      42    13.167784       12.483209
      44    6593.866812     6193.351837
      46    6.043523        6724.990507
      48    5.826424        4.818514
      50    6683.346163     7479.948680
      52    15.816401       18.159363
      54    7432.544980     7393.723543
      56    6.786277        6.786281
      58    26144.753408    31253.571023
      60    717.220963      745.647667

  11. [Figure slide; no recoverable text]

  12. [Figure slide; no recoverable text]

  13. Volume Summary
         Volume Calculation    N    Mean      Standard Deviation    Min          Median    Max
      1  ProportionalChange    30   3.51      38.66                 -99.70       0         99.94
      2  VolumeDifference      30   1339.28   11401.51              -20490.26    0         56605.86
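
As a sketch of how a summary of this shape could be produced in base R: the slide does not define ProportionalChange, so the definition below, 100 * (value2 - value1) / (value1 + value2), is an assumption (chosen because it is consistent with the reported range of roughly -100 to 100); VolumeDifference is taken as value2 - value1.

    # Assumes the same test/re-test data frame tr with columns value1, value2.
    vol_diff <- tr$value2 - tr$value1
    prop_chg <- 100 * vol_diff / (tr$value1 + tr$value2)   # assumed definition

    summ <- function(x) c(N = length(x), Mean = mean(x), SD = sd(x),
                          Min = min(x), Median = median(x), Max = max(x))
    print(round(rbind(ProportionalChange = summ(prop_chg),
                      VolumeDifference   = summ(vol_diff)), 2))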

  14. Intraclass Correlation • Intraclass correlation for test/re-test: 0.68 • Minimum detectable change: 20835.38 14

  15. Other Building-Block Studies (In Process) • Inter- and intra-reader variability • Studies of bias and variation when truth is known via phantoms • Studies for understanding the variation due to standard, uncontrolled factors (e.g., site, machine) 15

  16. Formal Deployment: QIBA/RIC Instance • We are well on the way to having a “third-party” installation of Execute. • These reference data are largely in our 2nd-generation format for naming and layout. • Whereas CT volumetry has been (and continues to be) the focus of the test bed, the first use of the RSNA instance is DCE-MRI. 16

  17. [Figure slide; no recoverable text]

  18. Value proposition of QI-Bench • Efficiently collect and exploit evidence establishing standards for optimized quantitative imaging: • Users want confidence in the read-outs • Pharma wants to use them as endpoints • Device/software companies want to market products that produce them without huge costs • The public wants to trust the decisions they contribute to • By providing a verification framework to develop precompetitive specifications and support test harnesses to curate and utilize reference data • Doing so as an accessible and open resource facilitates collaboration among diverse stakeholders 18

  19. Summary: QI-Bench Contributions • We make it practical to increase the amount of data analyzed, for greater statistical power. • We provide practical means to grapple with massive data sets. • We address the problem of efficient use of resources to assess limits of generalizability. • We make formal specification accessible to diverse groups of experts who are not skilled in, or interested in, knowledge engineering. • We map both medical and technical domain expertise into representations well suited to emerging capabilities of the semantic web. • We enable a mechanism to assess compliance with standards or requirements within specific contexts for use. • We take a “toolbox” approach to statistical analysis. • We provide the capability in a manner that is accessible to varying levels of collaborative models, from individual companies or institutions, to larger consortia or public-private partnerships, to fully open public access. 19

  20. QI-Bench Structure / Acknowledgements • Prime: BBMSC (Andrew Buckler, Gary Wernsing, Mike Sperling, Matt Ouellette) • Co-Investigators • Kitware (Rick Avila, Patrick Reynolds, Julien Jomier, Mike Grauer) • Stanford (David Paik) • Financial support as well as technical content: NIST (Mary Brady, Alden Dima, John Lu) • Collaborators / Colleagues / Idea Contributors • Georgetown (Baris Suzek) • FDA (Nick Petrick, Marios Gavrielides) • UMD (Eliot Siegel, Joe Chen, Ganesh Saiprasad, Yelena Yesha) • Northwestern (Pat Mongkolwat) • UCLA (Grace Kim) • VUmc (Otto Hoekstra) • Industry • Pharma: Novartis (Stefan Baumann), Merck (Richard Baumgartner) • Device/Software: Definiens, Median, Intio, GE, Siemens, Mevis, Claron Technologies, … • Coordinating Programs • RSNA QIBA (e.g., Dan Sullivan, Binsheng Zhao) • Under consideration: CTMM TraIT (Andre Dekker, Jeroen Belien) 20
