
Monthly Program Update March 8, 2012 Andrew J. Buckler, MS Principal Investigator



  1. Monthly Program Update, March 8, 2012. Andrew J. Buckler, MS, Principal Investigator. With funding support provided by the National Institute of Standards and Technology.

  2. Agenda • Summary and close-out of the "Winter 2012" development iteration • Covering what's been accomplished from multiple points of view • Preview of the "Spring 2012" development iteration • With focus on directions in Study Description and the "ISA" storage model, and evaluation of the workflow engine.

  3. Development iterations:
  • Autumn 2011 (n=19), 3A Pilot: ramp-up of formal development environment (including issue tracking); initial Specify (including QIBO and Knowledgebase)
  • Winter 2012 (n=47): major update to Execute (metadata extraction, better BatchMake GUI); initial Formulate; Specify now creates instances in the knowledgebase
  • Spring 2012 (n=32), 3A Pivotal, Analyze project: change studies; scripted reader studies; export to Analyze; import from Formulate; evaluate the workflow application "Iterate"
  • Unstaged (n=19), ISA storage model, Specify/Formulate project: Clojure DSL for executable specifications; major update to Analyze; RDF compliance in Specify; Formulate using SPARQL; service APIs for Iterate

  4. Design documents (user needs and requirements analysis, architecture, and application-specific design; user: lab protocol developer):
  • Specify: "Specify" Scope Description (ASD); "Specify" Architecture Specification (AAS); "Quantitative Imaging Biomarker Ontology (QIBO)" Software Design Document (SDD); "Biomarker DB" (a.k.a. the triple store) Software Design Document (SDD); AIM Template Builder design documentation
  • Formulate: "Formulate" Scope Description (ASD); "Formulate" Architecture Specification (AAS); "NBIA Connector" Software Design Document (SDD)
  • Execute: "Execute" Scope Description (ASD); "Execute" Architecture Specification (AAS); Reference Data Set Manager (RDSM) Software Design Document (SDD); Batch Analysis Service Software Design Document (SDD)
  • Analyze: "Analyze" Scope Description (ASD); "Analyze" Architecture Specification (AAS)
  • Package: "Package" Scope Description (ASD); "Package" Architecture Specification (AAS)
  Example Pilot 3A data processing steps:
  • Develop and run queries based on data requirements (use of Formulate)
  • Load reference data into the Reference Data Set Manager
  • Prepare the data set; create ground truth or other reference annotation and markup; import location points and other data for use
  • Package the algorithm or method using the Batch Analysis Service API; write scripts; initiate a batch analysis run (server-side processing using the Batch Analysis Service)
  • Perform statistical analysis (Analyze use instructions)

  5. 3A Challenge Series participants (first, primary, and secondary): GE Healthcare; Icon Medical Imaging; Columbia University; INTIO, Inc.; Vital Images, Inc.; Median Technologies; Fraunhofer MEVIS; Siemens; Moffitt Cancer Center; Toshiba. (Figure: a series of investigations, each running pilot and pivotal phases with separate train and test sets; the early analysis results rest on a defined set of data, a defined challenge, and a defined test-set policy.)

  6. Standardized Representation of Quantitative Imaging; Statistical Validation Services for Quantitative Imaging.

  7. OK, now into the details for the Spring 2012 iteration, starting with what we said in January: an RDF triple store relates concepts such as CT volumetry, CT, tumor growth, and therapeutic efficacy (via properties like obtained_by, used_for, and measure_of), and links the Specify, Formulate, Execute, and Analyze applications, with feedback into QIBO and the reference data sets. The statistical formalism takes the form Y = β0..n + β1(QIB) + β2T + eij.
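Reading the slide's formalism as an ordinary linear model with a single intercept β0, a biomarker coefficient β1, and a treatment term β2T, it can be exercised end-to-end. A minimal sketch, assuming NumPy and purely simulated data; the coefficient values 1.0, 0.8, and -0.5 are illustrative choices, not numbers from the program:

```python
import numpy as np

# Illustrative fit of Y = b0 + b1*(QIB) + b2*T + e by ordinary least squares.
# All data are simulated; the "true" coefficients below are arbitrary.
rng = np.random.default_rng(0)
n = 200
qib = rng.normal(5.0, 1.5, n)              # simulated biomarker readings
t = rng.integers(0, 2, n).astype(float)    # simulated treatment indicator
e = rng.normal(0.0, 0.2, n)                # residual noise
y = 1.0 + 0.8 * qib - 0.5 * t + e          # simulated clinical outcome

X = np.column_stack([np.ones(n), qib, t])  # design matrix [1, QIB, T]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # recovered (b0, b1, b2), close to (1.0, 0.8, -0.5)
```

In the program's setting, Y would be a clinical endpoint and T a therapy indicator; the point of the sketch is only that the slide's formalism is directly estimable once collected values exist.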

  8. …and where we left off in February:
  // Business Requirements
  • FNIH, QIBA, and C-Path participants don't have a way to provide a precise specification of context for use and applicable assay methods (to allow semantic labeling): BiomarkerDB = Specify(biomarker domain expertise, ontology for labeling);
  • Researchers and consortia don't have the ability to exploit existing data resources with high precision and recall: ReferenceDataSet+ = Formulate(BiomarkerDB, {DataService});
  • Technology developers and contract research organizations don't have a way to do large-scale quantitative runs: ReferenceDataSet.CollectedValue+ = Execute(ReferenceDataSet.RawData);
  • The community lacks a way to apply definitive statistical analyses of annotation and image markup over a specified context for use: BiomarkerDB.SummaryStatistic+ = Analyze({ReferenceDataSet.CollectedValue});
  • Industry lacks standardized ways to report and submit data electronically: eFiling transactions+ = Package(BiomarkerDB, {ReferenceDataSet});

  9. Rotate it to align with the horizontal rather than vertical presentation of our splash screen; the business-requirement statements themselves are unchanged from the previous slide.

  10. …to arrive at a new, more complete view (interpreting the braces as a separate application); again, the business-requirement statements are unchanged.
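Taken together, the five statements describe a dataflow: Specify produces the BiomarkerDB, Formulate turns it into reference data sets, Execute produces collected values, Analyze reduces them to summary statistics, and Package assembles the submission. A minimal sketch of that pipeline in Python, where every function body is an invented stub standing in for the real application:

```python
# Hedged sketch of the Specify -> Formulate -> Execute -> Analyze -> Package
# dataflow named in the business requirements. The stubs below are
# placeholders, not the actual QI-Bench services.

def specify(expertise, ontology):
    """Capture a biomarker specification as labeled terms (BiomarkerDB)."""
    return {"claims": expertise, "labels": ontology}

def formulate(biomarker_db, data_services):
    """Query data services for reference data sets matching the spec."""
    return [{"source": s, "raw_data": [1, 2, 3]} for s in data_services]

def execute(reference_data_set):
    """Run batch measurements over raw data, producing collected values."""
    return [v * 2.0 for v in reference_data_set["raw_data"]]

def analyze(collected_values):
    """Compute summary statistics over all collected values."""
    flat = [v for values in collected_values for v in values]
    return {"mean": sum(flat) / len(flat)}

def package(biomarker_db, reference_data_sets):
    """Assemble an electronic submission from the spec and data sets."""
    return {"spec": biomarker_db, "data": reference_data_sets}

db = specify(["volume is reproducible"], ["QIBO"])
ref_sets = formulate(db, ["NBIA"])
collected = [execute(r) for r in ref_sets]
stats = analyze(collected)
submission = package(db, ref_sets)
print(stats)  # {'mean': 4.0}
```

The "+" suffixes on the slide indicate that each stage accumulates results, which is why the stubs return collections rather than single values.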

  11. Worked Example (starting from the claim analysis we discussed in February 2011): Measurements of tumor volume are more precise (reproducible) than uni-dimensional measurements of tumor diameter. Longitudinal changes in whole tumor volume during therapy predict clinical outcomes (i.e., OS or PFS) earlier than corresponding uni-dimensional measurements. Therefore, tumor response or progression as determined by tumor volume will be able to serve as the primary endpoint in well-controlled Phase II and III efficacy studies of cytotoxic and selected targeted therapies (e.g., antiangiogenic agents, tyrosine kinase inhibitors, etc.) in several solid, measurable tumors (including both primary and metastatic cancers, e.g., lung, liver, colorectal, gastric, and head and neck) and lymphoma. Changes in tumor volume can serve as the endpoint for regulatory drug approval in registration trials. Biomarker claim statements are information-rich and may be used to set up the needed analyses.

  12. The user enters information from the claim into the knowledgebase using Specify, tagging each measure in the claim text (slide 11) as categoric or continuous.

  13. …pulling various pieces of information from the claim text (slide 11), namely the intervention, the target, and the indication,

  14. …to form the specification, capturing the intended purposes: to substantiate the quality of evidence development, and to produce data for registration (claim text as on slide 11).

  15. Formulate interprets the specification as testable hypotheses (claim text as on slide 11), identifying, among other elements, (1) the technical characteristic and (2) the type of biomarker, in this case predictive (it could have been something else, e.g., prognostic), to establish the mathematical formalism.

  16. …setting up an investigation (I), study (S), assay (A) hierarchy:
  • Technical Performance = Biological Target + Assay Method
  • Clinical Validity = Indicated Biology + Technical Performance
  • Clinical Utility = Biomarker Use + Clinical Validity
  Investigation-Study-Assay hierarchy (investigations to prove the hypotheses):
  Investigation = {Summary Statistic} + {Study}
  Study = {Descriptive Statistic} + Protocol + {Assay}
  Assay = RawData + {AnnotationData}
  AnnotationData = [AIM file | mesh | …]
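The hierarchy on the slide maps naturally onto nested record types. A minimal sketch using Python dataclasses, where the field names are one plausible reading of the slide's outline rather than the actual QI-Bench schema:

```python
from dataclasses import dataclass, field
from typing import List

# Sketch of Investigation = {SummaryStatistic} + {Study};
# Study = {DescriptiveStatistic} + Protocol + {Assay};
# Assay = RawData + {AnnotationData}. Names are illustrative.

@dataclass
class Assay:
    raw_data: str                                         # e.g. URI of an image series
    annotations: List[str] = field(default_factory=list)  # AIM files, meshes, ...

@dataclass
class Study:
    protocol: str
    descriptive_statistics: List[str] = field(default_factory=list)
    assays: List[Assay] = field(default_factory=list)

@dataclass
class Investigation:
    summary_statistics: List[str] = field(default_factory=list)
    studies: List[Study] = field(default_factory=list)

# Hypothetical instance: one study with one annotated assay.
inv = Investigation(
    studies=[Study(protocol="CT volumetry, repeat scan",
                   assays=[Assay(raw_data="rdsm://series/1",
                                 annotations=["aim:lesion-1.xml"])])])
print(len(inv.studies[0].assays))  # 1
```

The braces on the slide denote sets, hence the list-valued fields; the bracketed alternatives for AnnotationData become the different strings an annotation entry may hold.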

  17. …and loading data into Execute (at least raw data, possibly annotations if they already exist): discovered data is loaded into the RDSM (Reference Data Set Manager: heavyweight storage with URIs), and triples are added to capture the URIs (knowledgebase: lightweight storage linking to URIs).
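The split the slide describes, heavyweight objects in the RDSM addressed by URI and lightweight triples in the knowledgebase pointing at those URIs, can be sketched with plain Python containers. Every identifier, URI, and predicate name below is invented for illustration:

```python
# Sketch of the storage split: heavyweight payloads live in the RDSM and are
# addressed by URI; the knowledgebase holds only small (s, p, o) triples
# that reference those URIs. All names here are hypothetical.

rdsm = {}              # URI -> stored payload (stand-in for the RDSM)
knowledgebase = set()  # lightweight (subject, predicate, object) triples

def load_into_rdsm(uri, payload):
    """Store a heavyweight object and return its URI for linking."""
    rdsm[uri] = payload
    return uri

def add_triple(subject, predicate, obj):
    """Record a lightweight fact in the knowledgebase."""
    knowledgebase.add((subject, predicate, obj))

scan_uri = load_into_rdsm("rdsm://assay/123/raw", b"...DICOM bytes...")
add_triple("assay:123", "hasRawData", scan_uri)
add_triple("assay:123", "partOf", "study:7")

# Resolve: follow the triple to the URI, then fetch from the RDSM.
links = [o for s, p, o in knowledgebase
         if s == "assay:123" and p == "hasRawData"]
print(links)           # ['rdsm://assay/123/raw']
print(rdsm[links[0]])  # b'...DICOM bytes...'
```

The design choice this illustrates: the triple store stays small and queryable because it never holds image data, only URIs into the heavyweight store.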

  18. If no annotations exist, Execute creates them (in either case leaving Analyze with its data set up for it), either in batch or via scripted reader studies (using the "Share" and "Duplicate" functions of the RDSM to leverage cases across investigations); the knowledgebase is self-generating from the RDSM hierarchy and ISA-TAB description files. As before: Reference Data Set Manager, heavyweight storage with URIs; knowledgebase, lightweight storage linking to URIs.

  19. Analyze performs the statistical analyses…

  20. …and adds the results to the knowledgebase (using W3C "best practices" for "relation strength"), with each stored result addressable by its own URI.

  21. Package: structure submissions according to eCTD, HL7 RCRIM, and SDTM.
  Section 2: Summaries
  2.1. Biomarker Qualification Overview
  2.1.1. Introduction
  2.1.2. Context of Use
  2.1.3. Summary of Methodology and Results
  2.1.4. Conclusion
  2.2. Nonclinical Technical Methods Data
  2.2.1. Summary of Technical Validation Studies and Analytical Methods
  2.2.2. Synopses of individual studies
  2.3. Clinical Biomarker Data
  2.3.1. Summary of Biomarker Efficacy Studies and Analytical Methods
  2.3.2. Summary of Clinical Efficacy [one for each clinical context]
  2.3.3. Synopses of individual studies
  Section 3: Quality <used when an individual sponsor qualifies a marker in a specific NDA>
  Section 4: Nonclinical Reports
  4.1. Study reports
  4.1.1. Technical Methods Development Reports
  4.1.2. Technical Methods Validation Reports
  4.1.3. Nonclinical Study Reports (in vivo)
  4.2. Literature references
  Section 5: Clinical Reports
  5.1. Tabular listing of all clinical studies
  5.2. Clinical study reports and related information
  5.2.1. Technical Methods Development reports
  5.2.2. Technical Methods Validation reports
  5.2.3. Clinical Efficacy Study Reports [context for use]
  5.3. Literature references

  22. Iterate: reproducible workflows with documented provenance (with an illustrative expansion of the databases).

  23. (figure-only slide)

  24. Value proposition of QI-Bench
  • Efficiently collect and exploit evidence establishing standards for optimized quantitative imaging:
  • Users want confidence in the read-outs
  • Pharma wants to use them as endpoints
  • Device/SW companies want to market products that produce them without huge costs
  • The public wants to trust the decisions that they contribute to
  • By providing a verification framework to develop precompetitive specifications and support test harnesses to curate and utilize reference data
  • Doing so as an accessible and open resource facilitates collaboration among diverse stakeholders

  25. Summary: QI-Bench Contributions
  • We make it practical to increase the magnitude of data for increased statistical significance.
  • We provide practical means to grapple with massive data sets.
  • We address the problem of efficient use of resources to assess limits of generalizability.
  • We make formal specification accessible to diverse groups of experts who are not skilled in, or interested in, knowledge engineering.
  • We map both medical and technical domain expertise into representations well suited to emerging capabilities of the semantic web.
  • We enable a mechanism to assess compliance with standards or requirements within specific contexts for use.
  • We take a "toolbox" approach to statistical analysis.
  • We provide these capabilities in a manner accessible to varying collaborative models, from individual companies or institutions, to larger consortia or public-private partnerships, to fully open public access.

  26. QI-Bench Structure / Acknowledgements
  • Prime: BBMSC (Andrew Buckler, Gary Wernsing, Mike Sperling, Matt Ouellette)
  • Co-Investigators
  • Kitware (Rick Avila, Patrick Reynolds, Julien Jomier, Mike Grauer)
  • Stanford (David Paik)
  • Financial support as well as technical content: NIST (Mary Brady, Alden Dima, John Lu)
  • Collaborators / Colleagues / Idea Contributors
  • Georgetown (Baris Suzek)
  • FDA (Nick Petrick, Marios Gavrielides)
  • UMD (Eliot Siegel, Joe Chen, Ganesh Saiprasad, Yelena Yesha)
  • Northwestern (Pat Mongkolwat)
  • UCLA (Grace Kim)
  • VUmc (Otto Hoekstra)
  • Industry
  • Pharma: Novartis (Stefan Baumann), Merck (Richard Baumgartner)
  • Device/Software: Definiens, Median, Intio, GE, Siemens, Mevis, Claron Technologies, …
  • Coordinating Programs
  • RSNA QIBA (e.g., Dan Sullivan, Binsheng Zhao)
  • Under consideration: CTMM TraIT (Andre Dekker, Jeroen Belien)
