Agenda: New benchmark ICC and Dice's · Training set for Naïve tracers · Beta testers' proposals

Presentation Transcript
  1. XII PMT meeting – September 26, 2012
  Agenda:
  • New benchmark ICC and Dice's
  • Training set for Naïve tracers
  • Beta testers' proposals
  • Congress presentations

  2. Benchmark Images
  • ADNI scans: 2 x 5 Scheltens atrophy scores x 2 sides x 2 magnet strengths (1.5 and 3T)
  • Total per rater: 40 hippocampi
  • 5 Master Tracers' segmentations: corrections after check for overlapping discrepancies
  • Harmonized Protocol (HP) improved in many points; HP re-sent to panelists

  3. Benchmark Images – Overlapping agreement
  • Volume ICCs: 0.73 on 1.5T images; 0.75 on 3T images
  • Maximum achievable level for human tracers?
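The slide reports agreement between raters' hippocampal volumes as ICCs and mentions Dice overlap in the agenda. As an illustrative sketch (not the project's actual analysis code; the function names and the toy masks are invented here), Dice overlap between two binary segmentations is 2|A∩B|/(|A|+|B|), and a two-way random, absolute-agreement ICC(2,1) can be computed from a standard mean-squares decomposition:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks (1 = perfect agreement)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def icc_2_1(x, y):
    """Two-way random, absolute-agreement ICC(2,1) for two raters'
    paired volume measurements (equal-length arrays)."""
    data = np.column_stack([x, y]).astype(float)
    n, k = data.shape
    grand = data.mean()
    ms_r = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    ms_c = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    ss_e = ((data - data.mean(axis=1, keepdims=True)
                  - data.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_e = ss_e / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Toy example: two overlapping square "hippocampus" masks on one slice
m1 = np.zeros((10, 10), bool); m1[2:7, 2:7] = True  # 25 voxels
m2 = np.zeros((10, 10), bool); m2[3:8, 3:8] = True  # 25 voxels, 16 shared
print(round(dice(m1, m2), 2))  # 2*16/(25+25) = 0.64
```

ICC values around 0.73-0.75, as on the slide, would indicate substantial but imperfect agreement between human tracers' volumes.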

  4. Training set for Naïve Tracers
  • Timing for certification platform? Will need to recap with available tracers in the 20 centres
  • Liana Apostolova (7T)
  • Brescia – Normative Archive (1.5 and 3T; 12 images of different severity per magnet field strength?)

  5. Publication Policy – standard form for submission:
  • TITLE of PROPOSAL
  • PI / CENTRE
  • AIM of the PROJECT
  • METHODS of the PROJECT
  • WHAT IS ASKED of the SC of the HARMONIZED HIPPOCAMPAL PROTOCOL
  • WHAT IS OFFERED to the HARMONIZED HIPPOCAMPAL PROTOCOL PROJECT/COMMUNITY

  6. Beta tester submissions
  • Masami Nishikawa (comparison of VSRAD performance; segmentation of human phantoms against the HP)
  • Lei Wang (library of atlases; validation vs other algorithm(s) and the HP?)
  • Ronald Pierson (training the ANN of Brain Image Analysis, LLC; validation vs other algorithm(s) and the HP?)

  7. Beta tester submissions – Lei Wang: use of "local" labels
  • "Construction of a library of atlases for the purposes of mapping MR scans such as the ADNI data set"
  • Asks: use of labels already segmented according to a local protocol
  • Asks: to be informed when harmonized labels become available
  • Offers: solving the conversion of MultiTracer files into a more common format

  8. Beta tester submissions – Masami Nishikawa
  • Project aim: to validate the new version of VSRAD (Voxel-based Specific Regional analysis system for Alzheimer's Disease), comparing it against the Harmonized Protocol as the gold-standard method for manually measuring hippocampal volume
  • Asks: Harmonized Protocol
  • Offers: hippocampal segmentations of 3 healthy volunteers, each scanned twice on 7 different machines, to contribute to the validation of the Harmonized Protocol

  9. Beta tester submissions – Ronald Pierson
  • Project aim: compare the results of the current hippocampal segmentation with other hippocampal definitions using BRAINS (Brain Research: Analysis of Images, Networks, and Systems)
  • Asks: Harmonized Protocol; ADNI IDs of benchmark subjects
  • Offers: not clear

  10. Congress Presentations
  • CTAD (October 29-31, 2012): Definition of the Harmonized Protocol for Hippocampal Segmentation
  • DGPPN Symposium 2012, Berlin (Andreas Fellgiebel): Segmentation of the hippocampus: Towards a joint EADC-ADNI harmonized protocol
  • AAN 2013: EADC-ADNI Benchmark Images of Harmonized Hippocampal Segmentation

  11. Papers describing the project (status legend: DONE / IN PROGRESS / PLANNED)
  • Survey of protocols (preliminary phase; published, JAD 2011)
  • Operationalization (preliminary phase; in revision, Alzheimer's & Dementia, MS n. ADJ-D-12-00094)
  • Axes check short report (Brescia Team, in progress)
  • Delphi consensus (Brescia Team, in progress)
  • Master tracers' practice and reliability (Brescia Team, in progress)
  • Development of certification platform (Duchesne and coll.)
  • Validation data (Brescia Team – companion paper 1)
  • Protocol definition (Brescia Team – companion paper 2)
  • Validation vs pathology (TBD)

  12. Project workflow (flattened flowchart; recoverable content)
  • Phases: GOLD STANDARD · VALIDATION VS CURRENT PROTOCOLS · ASSESSMENT OF SOURCES OF VARIANCE · TRAINING SET DEVELOPMENT · VALIDATION VS PATHOLOGY
  • Tracers: 20 naïve tracers (best 5 selected after qualification), 5 master tracers, 1 tracer
  • Local Protocol: ADNI scans, 2 x 5 Scheltens atrophy scores x 2 sides x 2 magnet strengths (1.5-3T); total per rater: 40 hippocampi
  • Training: ADNI scans, 10 at 1.5T x 2 sides x 7 SUs x 2 tracing rounds; total per rater: 40 hippocampi
  • Harmonized Protocol (benchmark, with qualification): ADNI scans, 2 x 5 Scheltens atrophy scores x 2 sides x 2 magnet strengths (1.5-3T); total per rater: 40 hippocampi
  • Harmonized Protocol (variance study): ADNI scans, 2 sides x 5 Scheltens atrophy scores x 3 time points (bl-1y-2y) x 3 scanners (+ retracing @ bl) x 2 magnet strengths (1.5-3T); total per rater: 240 hippocampi; assessment of variance due to side, trace-retrace, atrophy, time, scanner, rater
  • Harmonized Protocol (pathology): Mayo Clinic and NYU datasets; total: about 40 hippocampi; assessment of agreement with volume on pathology or ex vivo MRI, and correlation with neuronal density
  • Outputs: assessment of variance due to rater and centre; REFERENCE PROBABILISTIC MASKS with 95% C.I.; TRAINING SET
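The factorial designs on this slide imply the per-rater workloads it quotes. A quick arithmetic check (the split of the 240-hippocampus workload into a core design plus a baseline-only re-tracing is an assumption, consistent with the slide's "+ retracing @ bl"):

```python
from math import prod

# Benchmark design: 2 images x 5 Scheltens atrophy scores x 2 sides
# x 2 magnet strengths -> hippocampi traced per rater
benchmark = prod([2, 5, 2, 2])

# Variance-sources design: 2 sides x 5 scores x 3 time points x 3 scanners
# x 2 magnet strengths, plus a re-tracing of the baseline time point only
core = prod([2, 5, 3, 3, 2])        # full design: 180 hippocampi
retrace_bl = prod([2, 5, 1, 3, 2])  # baseline re-traced: 60 hippocampi
variance = core + retrace_bl

print(benchmark, variance)  # 40 240
```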

  13. GANTT

  14. CSF exclusion (n. labels)
  • Using a single CSF label to exclude a large CSF pool
  • Using the same CSF label to contain two (or more) separate CSF areas on the same slice, for each hippocampus
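The idea above, one CSF label per hippocampus that may hold several disconnected CSF areas on the same slice, all excluded from the trace in a single pass, can be sketched with boolean masks (the arrays here are synthetic, purely for illustration):

```python
import numpy as np

# Synthetic single-slice hippocampal trace
hippo = np.zeros((8, 8), bool)
hippo[1:7, 1:7] = True               # 36 traced voxels

# One CSF label containing two separate CSF areas on the same slice
csf = np.zeros((8, 8), bool)
csf[2, 2] = True                     # first CSF pool (1 voxel)
csf[5, 4:6] = True                   # second, disconnected area (2 voxels)

# Exclude every CSF voxel from the trace in a single operation
hippo_corrected = hippo & ~csf
print(int(hippo.sum()), int(hippo_corrected.sum()))  # 36 33
```

The design choice the slide points at is that one label suffices even for multiple disjoint CSF areas: the exclusion is voxel-wise, so connectivity of the label does not matter.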