CSR Quick Feedback Pilot
CSR Quick Feedback Pilot Mary Ann Guadagno, PhD, Senior Scientific Review Officer, CSR Office of the Director
Pilot Objective To collect feedback on CSR peer review through a short survey • Evaluate the utility of asking reviewers in chartered study sections to assess their meeting experience: • Quality of Prioritization • Collective Expertise • Assignment of Applications to Reviewers • Quality of Discussion
Pilot Scope • Two CSR Integrated Review Groups (IRGs) • Genes, Genomes, and Genetics (GGG) – Dr. Richard Panniers • Brain Disorders and Clinical Neuroscience (BDCN) – Dr. Samuel Edwards • 18 CSR Study Sections (January – March 2014) • Very short questionnaire – four agreement statements and one open-ended question, answerable in about 5 minutes • Delivered via email • Completed near the end of the study section meeting
Agreement Statements and Comments – online • S1 – The panel was able to prioritize applications according to their impact/scientific merit. • S2 – The roster of reviewers was an appropriate assembly of scientific expertise for the set of applications in the meeting. • S3 – The assignment of applications to reviewers made appropriate use of their broad expertise. • S4 – The nature of the scientific discussions supported the ability of the panel to evaluate the applications being reviewed. • General Comments – In addition to the answers you provided in this questionnaire, please add any other comments in the text box below.
Verbatim Comments from Reviewers • CSR panels are generally high quality. • Clear commitment of all reviewers to fairly review applications. • Video review once a year is a great idea. • Assignments are balanced and appropriate. • Differing score calibration by reviewers is a problem. • Scoring is uneven among reviewers. Still have score inflation. • Should separate overall scientific impact rating from technical merit. • IAM was difficult to move back and forth between so many discussions.
What Did We Learn? • Identification of reviewer likes and concerns. • Some SRGs and some practices received constructive feedback. • Strengths and limitations of methodology. • Technical issues – email, survey software, compliance, ease of analysis. • Input for future surveys – next steps. • Platform evaluation • Input from program observers • Change over time
Acknowledgements • Charles Dumais • George Chacko • Mei-Ching Chen • Paul Kennedy • Amanda Manning • Adrian Vancea • Richard Panniers and GGG SROs • Samuel Edwards and BDCN SROs • Michael Micklin