This project aims to define and compare the accuracy, usability, speed, and affordability of software for automated DWI/PWI mismatch analysis. The evaluation covers tools from Stanford University, Nordic Neurolab, the Acute Stroke Imaging Standardization Group (Japan), and Olea Medical, using a structured Multiple Criteria Decision Making (MCDM) methodology. Key objectives include identifying performance metrics and aggregating the results through several mathematical techniques. Expert workshops involving neurologists and medical physicists underpin the weighting of the evaluation criteria, supporting the selection of the most effective software for the EXTEND trial.
Multiple Attribute Evaluation of Automatic Co-registration Software
Daniel Liu, Leonid Churilov, Soren Christensen, Stephen Davis, Geoffrey Donnan
ISC 2009
Project Objective
To define and compare the accuracy, usability, speed, and affordability of software available for automated DWI/PWI mismatch analysis, with the aim of selecting a tool for use in the EXTEND trial
Software Evaluated
• RAPID: developed at Stanford University, USA; Linux-based software
• nordicICE Penguin Stroke Perfusion Module: Nordic Neurolab, Norway; developed at the Centre of Functionally Integrated Neuroscience (CFIN), Aarhus University, Denmark
• Perfusion Mismatch Analyzer: Acute Stroke Imaging Standardization Group, Japan
• Neuroscape/Perfscape: Olea Medical, France
Methodology – Multiple Criteria Decision Making
• A structured, scientific approach to handling subjective judgement
• Analysis of problems with multiple criteria requires the following steps (Belton & Stewart, 2002; Olson, 1996):
  • identifying objectives
  • arranging these objectives into a hierarchy and quantifying their relative importance
  • measuring how well the available alternatives perform on each criterion
  • aggregating the scores into a single measure using a multiattribute rating technique (a minimal sketch of this step follows below)
• Objective identification and weighting were performed at problem-structuring workshops with experts, involving neurologists and medical physicists
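To make the aggregation step concrete, here is a minimal Python sketch of a weighted additive (SMART-style) score. The software names match the alternatives above, but the weights and 0–100 scores are illustrative placeholders, not measurements from this project.

```python
# Minimal SMART-style additive aggregation sketch.
# Weights and scores are illustrative placeholders, not project data.

# Weights elicited at the expert workshops would replace these;
# they are assumed to be normalised so that they sum to 1.
weights = {"accuracy": 0.4, "usability": 0.3, "speed": 0.2, "affordability": 0.1}

# Hypothetical 0-100 performance scores per alternative and criterion.
scores = {
    "RAPID":                       {"accuracy": 80, "usability": 60, "speed": 90, "affordability": 70},
    "nordicICE":                   {"accuracy": 75, "usability": 85, "speed": 70, "affordability": 60},
    "Perfusion Mismatch Analyzer": {"accuracy": 70, "usability": 75, "speed": 65, "affordability": 90},
    "Neuroscape/Perfscape":        {"accuracy": 78, "usability": 70, "speed": 75, "affordability": 55},
}

def aggregate(alternative, weights):
    """Weighted additive value: sum of weight * score over all criteria."""
    return sum(weights[c] * alternative[c] for c in weights)

# Rank the alternatives by their aggregated score, best first.
ranking = sorted(scores, key=lambda name: aggregate(scores[name], weights), reverse=True)
for name in ranking:
    print(f"{name}: {aggregate(scores[name], weights):.1f}")
```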
Aggregation and Sensitivity Analysis
• After the available alternatives are measured on individual criteria, the resulting values are aggregated across criteria through a weighting procedure
• Weights reflect the perceived importance of individual criteria and are elicited during the problem-structuring workshops with experts, as indicated earlier
• Several mathematical aggregation procedures are used for cross-validation purposes:
  • SMART: simple multiattribute rating technique
  • SMARTS: simple multiattribute rating technique with swing weighting
  • SMARTER: simple multiattribute rating technique exploiting ranks
  • AHP: analytic hierarchy process
• Sensitivity analysis with respect to the weights is performed to estimate the robustness of the preferred software package (see the sketch after this list)
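Two items above lend themselves to a short sketch: SMARTER's rank-order centroid (ROC) weights, which derive weights purely from the ranking of the criteria rather than from directly elicited values, and a one-way sensitivity sweep over a single weight. The perturbation scheme shown (vary one weight, rescale the others proportionally) is a common convention, not necessarily the authors' exact procedure; the criteria ranking is assumed for illustration.

```python
# Sketch of SMARTER rank-order centroid (ROC) weights and a one-way
# sensitivity sweep over a single criterion weight. Illustrative only.

def roc_weights(n):
    """ROC weight for the criterion ranked k-th of n (1 = most important):
    w_k = (1/n) * sum_{i=k}^{n} 1/i."""
    return [sum(1.0 / i for i in range(k, n + 1)) / n for k in range(1, n + 1)]

def perturb(weights, target, new_value):
    """Set weights[target] to new_value, rescaling the remaining weights
    proportionally so that the full set still sums to 1."""
    rest_old = sum(v for k, v in weights.items() if k != target)
    rest_new = 1.0 - new_value
    return {k: (new_value if k == target else v * rest_new / rest_old)
            for k, v in weights.items()}

# Assumed ranking for illustration: accuracy > usability > speed > affordability.
names = ["accuracy", "usability", "speed", "affordability"]
weights = dict(zip(names, roc_weights(4)))
print(weights)  # accuracy ~0.52, usability ~0.27, speed ~0.15, affordability ~0.06

# One-way sensitivity: sweep the accuracy weight and watch the others rescale.
for w in (0.3, 0.5, 0.7):
    print(w, perturb(weights, "accuracy", w))
```

In a full analysis, the alternatives would be re-ranked at each point of such a sweep; if the preferred package stays on top across a plausible range of weights, the recommendation is robust.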