  1. Product Quality Metrics
H. K. (Rama) Ramapriyan, NASA/GSFC
Metrics Planning and Reporting (MPAR) WG
10th Earth Science Data Systems Working Group
Newport News, VA, November 2011

  2. Product Quality Metrics
• Overall objective: Given that the objective of the MEaSUREs program is to produce and deliver readily usable, high-quality Earth Science Data Records (ESDRs):
  • Define program-level metric(s) that permit assessment of the steps taken and progress made to ensure that high-quality products are provided by MEaSUREs projects and by the MEaSUREs program as a whole.
  • Develop a draft recommendation for product quality metric(s) that would then go through the regular MPARWG review process.
• Recommendation from the October 2010 ESDSWG meeting: develop a checklist of a small number of questions that represent progress in the product quality area.
• Product quality is considered a combination of scientific quality, completeness of the associated documentation and ancillary information, and effectiveness of the supporting services.
• Responsibility for product quality is shared between the projects generating the ESDRs and the DAACs that eventually archive and distribute them.

  3. Product Quality Metrics Concept
• Some questions are to be answered by Projects, some by DAACs, and some by both.
• The question set addresses: Science Quality, Documentation Quality, Accessibility / Support Services, and Usage / Satisfaction.
• Checklists based on the question set are completed for each ESDR, or for a group of ESDRs handled together, using a Project Checklist Template and a DAAC Checklist Template.
• The completed Project and DAAC ESDR checklists feed a project-level Summary Roll-Up.
• The summary roll-up contains the product quality metrics that are provided to NASA HQ; these may be aggregated across the MEaSUREs program (a minimal sketch of this flow follows below).
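To make the flow from completed checklists to a project-level roll-up concrete, here is a minimal sketch in Python. The field names, the yes/no/n-a answer vocabulary, and the per-area fraction summary are illustrative assumptions only; the actual checklist templates and roll-up format are still being worked out in this session.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    """One answered checklist question (hypothetical structure)."""
    esdr: str          # ESDR, or group of ESDRs handled together
    respondent: str    # "Project", "DAAC", or "Both"
    area: str          # e.g. "Science", "Documentation", "Access/Support", "Usage"
    question: str
    answer: str        # "yes", "no", or "n/a"
    comment: str = ""

def project_rollup(items):
    """Summarize completed checklists as the fraction of 'yes' answers per area."""
    tally = defaultdict(lambda: [0, 0])            # area -> [yes count, applicable count]
    for item in items:
        if item.answer.lower() == "n/a":
            continue                               # not counted for or against the project
        tally[item.area][1] += 1
        if item.answer.lower() == "yes":
            tally[item.area][0] += 1
    return {area: yes / total for area, (yes, total) in tally.items() if total}
```

A program-level report to NASA HQ could then combine these per-area fractions across projects; exactly what the summary contains is one of the discussion items for this session.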

  4. Product Quality Metrics
• Robert Frouin’s quality criteria and Deborah Smith’s set of product quality questions served as the starting point for an early draft of the question set. The draft question set was discussed and refined in a small telecon on March 17, 2011.
• On June 13, 2011, the draft question set and the draft Project and DAAC checklists were distributed to the projects and the MPARWG for review.
• Comments were received from 10 MEaSUREs projects and 2 data center representatives.
• A short paper presenting the reviewer comments, the question set, and the draft checklists (with some changes based on the comments received) was distributed to the MEaSUREs projects and the MPARWG on October 21, 2011, in preparation for this meeting.

  5. Product Quality Metrics – Session Plan
• Go through this presentation fairly quickly to see the big picture, then:
• Do a walk-through review of the draft question set and the draft Project and DAAC checklists.
  • Go area by area; add, delete, or improve questions.
  • Reflect changes to the questions in the draft checklists as they apply.
• Discuss how the roll-up from the checklists to an overall project-level summary would work:
  • What should the roll-up consist of, i.e., how best to summarize the checklist information?
  • This needs to be iterated between the projects / DAACs and Martha / Rama to ensure that the program-level stakeholders get what they need; that might be a program-level aggregate across the projects.
  • One possibility is that Rama could produce a strawman roll-up from an initial set of Project / DAAC checklists.
• Decide on next steps: how do we get to an MPARWG recommendation?

  6. Product Quality Metrics – Session Plan
• Product quality considers:
  • Science Quality,
  • Documentation Quality,
  • Accessibility and Support Services Quality,
  • Usage and User Satisfaction.
• The question set that follows addresses each area in turn and indicates how the questions apply to Projects and DAACs.
• A strong Project–DAAC partnership is the key to success.

  7. Science Quality Level Questions (part 1 of 2)
The use of the term ‘validation’ was a major topic of comment: some reviewers objected to the term, while others accepted it but with their own ideas of what it means. The suggested changes to question 4 retain the term but provide a general definition. If the definition is acceptable (with changes as needed) but the term ‘validation’ still raises objections, the term could be dropped.

  8. Science Quality Level Questions (part 2 of 2) Question 8 is a reviewer-recommended addition.

  9. Documentation Quality Level Questions The addition to question 1 was recommended by a reviewer.

  10. Accessibility / Support Services Quality The addition of “and services” to question 3 was recommended by a reviewer.

  11. Usage / Satisfaction Questions

  12. Draft Project Checklist Per a reviewer recommendation, responses to all questions could include comments and would use a format, such as Word, that facilitates commenting.

  13. Draft DAAC Checklist Per a reviewer recommendation, responses to all questions could include comments and would use a format, such as Word, that facilitates commenting.

  14. Product Quality Metrics – Other Comments
• Reviewer comments were included in the short paper distributed prior to this meeting. A few highlights:
• “Also, what answers might trigger someone to decide that a dataset is not ‘ready to distribute’ (is this at all a goal of the checklists?).” Suggested response: the checklists could be used as an aid by the P.I. and the DAAC as they decide whether a data set is ready for distribution, but they are not intended to be determinative in themselves.
• “The more detail we require the more onerous the task of filling out the questionnaire becomes. As I recall from the telecon, the goal was to collect ‘information about information’... as the questions do.” Suggested response: the level of detail would be up to the P.I. and the DAAC; it need only be sufficient to allow compilation of the summary metrics reported to NASA HQ.
• “What is complete metadata or a complete description? What one person sees as complete another might see as incomplete.” Guidance on what ‘complete’ means is needed. The MEaSUREs solicitation requires that “Any data products shall contain and be searchable by FGDC-compliant metadata …”, that projects “ensure that the MEaSUREs project’s products ingested at the EOSDIS Data Centers are searchable in ways similar to other products at those Data Centers”, and that projects “Use the Global Change Master Directory (GCMD) as one means to announce the availability of project products and services”.
• “I would preface all of these by, could an entry level graduate student in the earth sciences ...” A general question: should the objective that documentation, services, etc. be readily usable by an entry-level graduate student in the Earth sciences be adopted as guidance?

  15. Product Quality Metrics – Next Steps
• Possible next steps:
  • Continue work on the questions / checklists; reach agreement on a first version to work with.
  • Projects and DAACs compile an initial set of checklists; P.I.s send them to Rama.
  • Rama creates a strawman set of project-level summary roll-ups and an aggregated program-level roll-up, and sends them back to the P.I.s.
  • Hold a telecon to discuss and modify the summary roll-ups.
  • Draft an MPARWG recommendation for product quality metrics (i.e., the agreed summary roll-ups).

  16. Background

  17. Measuring Product Quality – Aug. 17, 2010 Telecon
• Agreed approach for developing product quality metrics:
  • Using Robert Frouin’s input as a starting point, develop a set of criteria or categories pertaining to product quality.
  • For each criterion, develop a set of questions of the form ‘has the project done this’, where ‘this’ is a specific item or task (e.g., is each product validated, is each product well documented). This amounts to a checklist that the project would gradually complete.
  • The degree to which the questions for a criterion are answered ‘yes’ at any point in time measures the degree to which the project meets that criterion, and a ‘high’, ‘medium’, or ‘low’ (or similar) completeness scale could be developed (a minimal sketch follows this list).
  • As the work of the project proceeds, more of the questions for each criterion would be answered in the affirmative, providing a measure of the project’s progress toward fully meeting the criteria.
  • Develop condensed or summary program-level metrics based on the project’s progress against the criteria (i.e., the project’s answers to the questions).
  • Track the program-level metrics over time.
  • Aggregate the program-level metrics across the projects to provide a measure of the MEaSUREs program’s overall progress.
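As an illustration of the ‘high’ / ‘medium’ / ‘low’ completeness scale and the program-level aggregation described above, here is a minimal Python sketch. The 0.8 and 0.5 thresholds and the simple averaging are placeholder assumptions, not an agreed scale.

```python
def completeness_rating(yes_fraction, high=0.8, medium=0.5):
    """Map a criterion's fraction of 'yes' answers to a coarse completeness rating.

    The thresholds are placeholders; the actual scale would be agreed
    through the MPARWG review process.
    """
    if yes_fraction >= high:
        return "high"
    if yes_fraction >= medium:
        return "medium"
    return "low"

def program_level_metric(project_fractions):
    """Aggregate one criterion's per-project fractions into a program-level value."""
    return sum(project_fractions) / len(project_fractions) if project_fractions else 0.0

# Example: three projects' fractions for one criterion at one reporting point.
fractions = [0.9, 0.6, 0.4]
ratings = [completeness_rating(f) for f in fractions]   # ['high', 'medium', 'low']
program_value = program_level_metric(fractions)         # ~0.63
```

Tracking a value like program_value at successive reporting points, per criterion, would show the program's overall progress toward fully meeting the criteria.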
