
3rd Program Face to Face, November 15, 2011. Andrew J. Buckler, MS, Principal Investigator.




Presentation Transcript


  1. 3rd Program Face to Face, November 15, 2011. Andrew J. Buckler, MS, Principal Investigator. With funding support provided by the National Institute of Standards and Technology.

  2. Value proposition of QI-Bench
  • Efficiently collect and exploit evidence establishing standards for optimized quantitative imaging:
  • Users want confidence in the read-outs
  • Pharma wants to use them as endpoints
  • Device/SW companies want to market products that produce them without huge costs
  • The public wants to trust the decisions that they contribute to
  • By providing a verification framework to develop precompetitive specifications and support test harnesses to curate and utilize reference data
  • Doing so as an accessible and open resource facilitates collaboration among diverse stakeholders

  3. QI-Bench Structure / Acknowledgements
  • Prime: BBMSC (Andrew Buckler, Gary Wernsing, Mike Sperling, Matt Ouellette)
  • Co-Investigators
  • Kitware (Rick Avila, Patrick Reynolds, Julien Jomier, Mike Grauer)
  • Stanford (David Paik, Tiffany Ting Liu)
  • Financial support as well as technical content: NIST (Mary Brady, Alden Dima, Guillaume Radde)
  • Collaborators / Colleagues / Idea Contributors
  • FDA (Nick Petrick, Marios Gavrielides)
  • UCLA (Grace Kim)
  • UMD (Eliot Siegel, Joe Chen, Ganesh Saiprasad)
  • VUmc (Otto Hoekstra)
  • Northwestern (Pat Mongkolwat)
  • Georgetown (Baris Suzek)
  • Industry
  • Pharma: Novartis (Stefan Baumann), Merck (Richard Baumgartner)
  • Device/Software: Definiens (Maria Athelogou), Claron Technologies (Ingmar Bitter)
  • Coordinating Programs
  • RSNA QIBA (e.g., Dan Sullivan, Binsheng Zhao)
  • Under consideration: CTMM TraIT (Henk Huisman, Jeroen Belien)

  4. Core Activities for Biomarker Development [Diagram: QI-Bench is use-case driven. Core activities: create and manage semantic infrastructure and linked data archives; create and manage physical and digital reference objects; collaborative activities to standardize and/or optimize the biomarker; a consortium establishes clinical utility/efficacy of the putative biomarker; a commercial sponsor prepares the device/test for market.]

  5. Core Activities for Biomarker Development [Architecture diagram mapping the core activities onto QI-Bench components: QISL (Quantitative Imaging Specification Language); QIBO; UPICT protocols and QIBA profiles, entered with a Ruby on Rails web service; a Reference Data Set Manager holding reference data sets, annotations, and analysis results; batch analysis scripts (BatchMake) feeding a Batch Analysis Service; literature papers and other sources of clinical study results; and output as a clinical body of evidence, formatted to enable SDTM and/or other standardized registrations. Red edges represent biostatistical generalizability.]

  6. [The same architecture diagram as slide 5, repeated: QISL, QIBO, the Reference Data Set Manager, batch analysis scripts, the Batch Analysis Service, UPICT protocols and QIBA profiles, and the clinical body of evidence output.]

  7. Implementation technologies:
  • QIBO, AIM, RadLex/SNOMED/NCIt; built using Ruby on Rails.
  • caB2B, NBIA, PODS data elements, DICOM query tools.
  • SDTM standard of CDISC into repositories like FDA’s Janus.
  • MIDAS, BatchMake, Condor Grid; built using CakePHP.
  • MVT portion of AVT, re-usable library of R scripts.

  8. BSD-2 license
  • Domain is www.qi-bench.org.
  • Landing page provides:
  • Access to prototypes,
  • Repositories for download and development,
  • Acknowledgements,
  • Jira issue tracking, and
  • Documentation

  9. Project wiki includes sections for:
  • Project management plan,
  • User needs analysis (including use cases),
  • Lab Protocol,
  • Developer help (including use of Git),
  • Meeting minutes, and
  • Discussion of investigators/collaborators.

  10. Specify: Specify is presently a composite of QISL and the AIM template builder. The Quantitative Imaging Specification Language (QISL) uses the Quantitative Imaging Biomarker Ontology (QIBO) and other linked ontologies to develop a triple store based on user Q/A.
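As a rough illustration of the triple-store idea (a toy sketch, not QISL's actual implementation), each Q/A answer can be recorded as a subject-predicate-object triple; all names and values below are hypothetical:

```python
# Minimal sketch of turning Specify-style Q/A answers into RDF-style
# subject-predicate-object triples. Names and values are hypothetical.
triples = set()

def answer(subject, predicate, obj):
    """Record one Q/A answer as a triple."""
    triples.add((subject, predicate, obj))

spec = "spec:volumetric-ct-1"   # hypothetical specification node
answer(spec, "qibo:biomarker", "tumor volume")
answer(spec, "qibo:modality", "CT")
answer(spec, "qibo:clinicalContext", "response assessment")

def query(predicate):
    """Return all objects recorded for a given predicate."""
    return sorted(o for (_, p, o) in triples if p == predicate)

print(query("qibo:modality"))   # ['CT']
```

A real triple store would of course use URIs and a SPARQL-capable backend; the point here is only the shape of the data the Q/A dialog produces.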

  11. Quantitative Imaging Biomarker Ontology (QIBO)
  • Initial curation to collect terms: reviewed 126 articles across 6 therapeutic areas, elaborating 225 imaging markers
  • Reusing other publicly available ontologies: MeSH, NCI Thesaurus, GO, FMA, and BIRNLex
  • Current state: 490 classes and relationship properties covering clinical context for use and assay methods.
  • Next steps: adopt the Basic Formal Ontology (BFO) as an upper ontology. BFO provides a formal structure of upper-level abstract classes and has been adopted by the Open Biological and Biomedical Ontologies (OBO) Foundry, a large collaborative effort with the goal of creating orthogonal and interoperable ontologies in biomedical research.
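The layering described above, with BFO-style upper-level classes sitting above QIBO's domain classes, can be sketched with a toy subsumption check; the class names here are illustrative, not taken from the actual ontologies:

```python
# Toy sketch of an ontology class hierarchy with an upper ontology
# (BFO-style) above domain classes (QIBO-style). Names are illustrative.
parents = {
    "qibo:ImagingBiomarker": "bfo:Quality",          # domain class under upper class
    "qibo:TumorVolume": "qibo:ImagingBiomarker",
    "qibo:SUVmax": "qibo:ImagingBiomarker",
}

def is_a(cls, ancestor):
    """Walk the parent chain to test subsumption."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = parents.get(cls)
    return False

print(is_a("qibo:TumorVolume", "bfo:Quality"))   # True
```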

  12. Specify (continued): The idea is that AIM templates would be constructed and linked to the other specification information from the ontologies. Presently the AIM template builder merely co-exists in the prototype app; it is not yet functionally integrated as ultimately intended.
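Since AIM templates and annotations are XML artifacts, the construction step can be sketched with the standard library; the element and attribute names below are simplified placeholders, not the real AIM schema:

```python
# Sketch of building a minimal AIM-style XML annotation skeleton.
# Element/attribute names are simplified placeholders, not the AIM schema.
import xml.etree.ElementTree as ET

root = ET.Element("ImageAnnotation", {"name": "LesionVolume"})
calc = ET.SubElement(root, "Calculation", {"description": "volume"})
ET.SubElement(calc, "Value", {"value": "1532.4", "unit": "mm3"})

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

In the intended design, such a document would be validated against the template's .xsd and linked to the ontology-derived specification rather than built by hand.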

  13. Formulate: Web-enabled service for aggregating reference data based on endpoints

  14. Formulate (continued)
  • One small part of Formulate that we have completed is a CQL “connector” to import data from NBIA. We do this to optimize storage for grid computing and to include the metadata storage needed to run experiments.
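A minimal sketch of what such a connector might do, assuming a hypothetical query endpoint and field names (no network call is made; a canned response stands in for what the service would return):

```python
# Sketch of an NBIA-style connector: build a series-metadata query and
# parse the response. The endpoint path and field names are assumptions;
# `response_body` stands in for the service's reply (no network call).
import json
from urllib.parse import urlencode

BASE = "https://nbia.example.org/services/query/getSeries"  # hypothetical
params = {"Collection": "RIDER", "Modality": "CT"}
request_url = f"{BASE}?{urlencode(params)}"

response_body = '[{"SeriesInstanceUID": "1.2.3.4", "ImageCount": 120}]'
series = json.loads(response_body)
uids = [s["SeriesInstanceUID"] for s in series]
print(request_url)
print(uids)   # ['1.2.3.4']
```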

  15. Endpoints for Formulate

  16. Execute is where analyses of Reference Data Sets take place. It is based on MIDAS and the associated BatchMake capability, but extends them for QI-Bench. The storage model is optimized for metadata storage and grid computing.
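The batch-analysis pattern, applying one analysis routine to every case in a reference data set and collecting per-lesion results, can be sketched as follows; the data and the measurement function are stand-ins for a real reference data set and a real algorithm:

```python
# Sketch of the batch-analysis pattern: run one measurement over every
# case in a reference data set. Data and measure function are stand-ins.
def measure_volume(lesion):
    # Placeholder for a real algorithm (e.g., an LSTK segmentation).
    return lesion["voxels"] * lesion["voxel_mm3"]

reference_set = [
    {"case": "pilot-3A-001", "voxels": 5200, "voxel_mm3": 0.5},
    {"case": "pilot-3A-002", "voxels": 3100, "voxel_mm3": 0.5},
]

results = {c["case"]: measure_volume(c) for c in reference_set}
print(results)   # {'pilot-3A-001': 2600.0, 'pilot-3A-002': 1550.0}
```

In QI-Bench this loop is dispatched server-side via BatchMake scripts rather than run inline, but the unit of work per case is the same.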

  17. Execute: First Reference Data Sets
  • Pilot 3A: 156 lesions for evaluation (1A read 15)
  • Pivotal 3A: 408 lesions for evaluation (1A read 40)
  • Study 1C: 2364 lesions for evaluation (1C is set to read 66)
  • Study 1187: 7122 lesions for evaluation
  • Available: RIDER, IDRI, MSKCC “1B”, …

  18. Execute roadmap
  • Script to write “Image Formation” content into the Biomarker Database for provenance of Reference Data Sets: Application for pulling in data from the “Image Formation” schema to populate the biomarker database. This data will originate from the DICOM imagery imported into QI-Bench.
  • Laboratory Protocol for the NBIA Connector and Batch Analysis Service: Laboratory protocol describing the use of the NBIA Connector and the Image Formation script to import data into QI-Bench, and the use of the Batch Analysis Service for server-side processing.
  • Support change-analysis biomarkers in serial studies (up to two time points in the current period, extensible to additional time points in subsequent development iterations): Support experiments including at minimum two time points, for example the change in volume or SUV rather than (only) estimation of the value at one time point.
  • Document and solidify the API harness for execution modules of the Batch Analysis Service: This task will include the documentation and complete specification of the BatchMake Application API.
  • Support scripted reader studies: Support reader studies through worklist items specified via AIM templates, as well as Query/Retrieve via DICOM standards for interaction with reader stations. ClearCanvas will serve as the target reader station for the first implementation.
  • Generate output from the LSTK module via AIM template (as opposed to hard-coded): Generate annotation and image markup output from reference algorithms (i.e., LSTK for volumetric CT and Slicer3D for SUV) based on AIM templates instead of the current hard-coded implementation. An AIM Template is an .xsd file.
  • Re-work the NBIA Connector to run in the context of Formulate: This task will include refactoring and stabilization of the NBIA Connector in order to incorporate its functionality into Formulate.
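The first roadmap item, extracting acquisition ("Image Formation") metadata from DICOM headers into provenance records, might look roughly like this; the tag names are real DICOM attributes, but the header dict stands in for a parsed DICOM dataset and its values are invented:

```python
# Sketch of the "Image Formation" provenance step: pull acquisition
# metadata from a DICOM header into a record for the biomarker database.
# Tag names are real DICOM attributes; the header dict and its values
# are stand-ins for a parsed DICOM dataset.
header = {
    "Manufacturer": "ExampleScanner Inc.",   # hypothetical values
    "SliceThickness": "1.25",
    "KVP": "120",
    "ConvolutionKernel": "B30f",
}

PROVENANCE_TAGS = ["Manufacturer", "SliceThickness", "KVP", "ConvolutionKernel"]

record = {tag: header.get(tag) for tag in PROVENANCE_TAGS}
print(record["SliceThickness"])   # 1.25
```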

  19. Analyze: Current Prototype Capabilities

  20. Analyze: MVT provides a good framework, but with gaps
  • Lesion tracking
  • Other modalities and measures, e.g., SUV via FDG-PET
  • Properly functioning multiple regression and N-way ANOVA
  • Support for Clinical Performance assessment (i.e., in addition to current Technical Performance)
  • Outcome studies
  • Integrated genomic/proteomic correlation studies
  • Group studies for biomarker qualification
  • Serial studies / change analysis
  • Persistent database
  • Scale-up to handle thousands of cases (tens of thousands of lesions)
  • Deploy as a Web app

  21. Analyze: Figures of Merit and Descriptive Statistics
  • Collaborative activity underway to converge on definitive descriptive statistics for technical and clinical performance
  • Approaches to defining and administering compliance in relationship with QIBA profiles.
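One widely used technical-performance figure of merit in this setting is the repeatability coefficient, RC = 2.77 × the within-subject standard deviation estimated from test-retest replicates; a sketch with invented paired measurements:

```python
# Repeatability coefficient from test-retest pairs:
# RC = 2.77 * within-subject SD, where the within-subject variance for
# paired replicates is mean(d^2) / 2. Measurements below are invented.
from math import sqrt

pairs = [(980.0, 1010.0), (1500.0, 1460.0), (2050.0, 2100.0)]  # test, retest (mm^3)

wsv = sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs))   # within-subject variance
rc = 2.77 * sqrt(wsv)
print(round(rc, 1))   # 80.0
```

Two repeat measurements of the same subject are then expected to differ by less than RC about 95% of the time.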

  22. Package: Structure submissions according to eCTD, HL7 RCRIM, and SDTM
  Section 2: Summaries
  2.1. Biomarker Qualification Overview
  2.1.1. Introduction
  2.1.2. Context of Use
  2.1.3. Summary of Methodology and Results
  2.1.4. Conclusion
  2.2. Nonclinical Technical Methods Data
  2.2.1. Summary of Technical Validation Studies and Associated Analytical Methods
  2.2.2. Synopses of individual studies
  2.3. Clinical Biomarker Data
  2.3.1. Summary of Biomarker Efficacy Studies and Associated Analytical Methods
  2.3.2. Summary of Clinical Efficacy [one for each clinical context]
  2.3.3. Synopses of individual studies
  Section 3: Quality <used when an individual sponsor qualifies a marker in the context of a specific NDA>
  Section 4: Nonclinical Reports
  4.1. Study reports
  4.1.1. Technical Methods Development Reports
  4.1.2. Technical Methods Validation Reports
  4.1.3. Nonclinical Study Reports (in vivo)
  4.2. Literature references
  Section 5: Clinical Reports
  5.1. Tabular listing of all clinical studies
  5.2. Clinical study reports and related information
  5.2.1. Technical Methods Development reports
  5.2.2. Technical Methods Validation reports
  5.2.3. Clinical Efficacy Study Reports [context for use]
  5.3. Literature references

  23. Package: Standards Mapping [Table relating NBIA data elements, the ACRIN reference, pointers to NCIt, RadLex, etc., CDASH/SDTM variables, DICOM tags, SDTM roles, and ISO 8601 & 21090 data types.]

  24. Package: NCI – CPATH – CDISC CRF WG

  25. Package: Web-enabled service for compiling results

  26. Open source model
  • www.qi-bench.org domain
  • BSD license
  • Extending rather than forking assets
  • Engaging with CBIIT OSDI program
  • QI-Bench specific assets in publicly accessible repositories, and full access to development tools through www.qi-bench.org
  • Project wiki at www.qi-bench.org/wiki

  27. Development Lifecycle Process for Centrally Developed Portions: High-level relationship among development processes (modeled after the corresponding process flow at caBIG to allow effective integration)

  28. QI-Bench Developer’s Resources


  30. Year Outlook

  31. Summary: QI-Bench Contributions
  • We make it practical to increase the magnitude of data for increased statistical significance.
  • We provide practical means to grapple with massive data sets.
  • We address the problem of efficient use of resources to assess limits of generalizability.
  • We make formal specification accessible to diverse groups of experts who are not skilled or interested in knowledge engineering.
  • We map both medical and technical domain expertise into representations well suited to emerging capabilities of the semantic web.
  • We enable a mechanism to assess compliance with standards or requirements within specific contexts for use.
  • We take a “toolbox” approach to statistical analysis.
  • We provide the capability in a manner accessible to varying levels of collaborative models, from individual companies or institutions, to larger consortia or public-private partnerships, to fully open public access.

