
Results of IAC Study of Metrics in Electronic Records Management (ERM) Systems


Presentation Transcript


  1. Results of IAC Study of Metrics in Electronic Records Management (ERM) Systems
     Dr. Rick Klobuchar, Vice President and Chief Technology Officer, SAIC - Enterprise Solutions Business Unit, 2829 Guardian Lane, Virginia Beach, VA 23452, richard.l.klobuchar@saic.com, (757) 631-2335
     Dr. Mark Giguere, Lead IT (Policy & Planning), ERM E-Gov co-Program Manager, Modern Records Programs, NARA, mark.giguere@nara.gov, (301) 837-1744

  2. Introduction and Principal Conclusions
     • How does one measure the impact of an ERM system on the bottom-line business or mission of an organization?
     • What is the business case for an enterprise ERM system?
     • Principal conclusions:
       • No silver bullet
       • No universal COTS tool or product
       • No single metric captures the success of an ERM system and relates unambiguously to the bottom line
     • Notwithstanding, some common categories of metrics are in use today:
       • Some metrics are less burdensome to capture than others
       • Some metrics merely reflect IT system performance
       • Some metrics reflect mission success more directly than others
     • Measurement of ERM performance is currently immature
       • Most measurements tend to be IT-related rather than related to records management itself
     • Valid comparisons of ERM practices across organizations are difficult to make, and probably should not be made

  3. Bottom Line
     • The inescapable conclusion: there is no simple, single answer!
       • There is no Swiss Army Knife-like tool
     • Tradeoffs must be made to arrive at metrics that are:
       • Meaningful measures of ERM success (e.g., “good” vs. “bad” metrics), and
       • Not too burdensome to capture on an enterprise-wide basis
     • “What gets measured is what gets done”
     • Aggregation of metrics into a single coherent picture of bottom-line performance is problematic

  4. Concerns to Consider
     • Metrics for Public Services Relating to ERM
       • The spirit of the eGovernment initiative is to provide a Government that “works better and costs less.”
       • Quantifiable, well-defined ERM metrics relating to capacity, throughput, security (especially data and records integrity), assured service availability, ubiquitous access, lower cost, improved turnaround times, etc. are of interest.
       • There is also concern about metrics that are unreliable, non-specific, intractable to interpret, or too burdensome or onerous to collect.

  5. Major Factors to Consider
     • Who is the Consumer?
       • The nature of the “consumer” is an important factor: “who” and/or “what” are the metrics sampling?
       • The “public at large”
       • Specific customers
       • Agency/company employees
       • Federal agencies
       • Other government agencies
       • Corporations
       • Foreign users, etc.
     • What is the ERM Business Practice?
       • Which specific “bottom-line” agency and/or industry business practices do the metrics support? For example:
         • Servicing FOIA requests
         • Support for legal discovery
         • Historical research
         • Genealogy
         • Auditing and controls
         • Regulatory compliance
         • Public information dissemination
         • Statistical analysis
         • Archival records management
         • Grants management
         • ERM systems operations and management
         • Specific mission support (e.g., medical, environmental, emergency and disaster, defense)

  6. Principles in Defining ERM Metrics
     • Not everything that can be measured needs to be measured, nor should it be
     • Metrics should have a purpose: continuing improvement
     • It is best to design the capture and management of metrics into a system up front, or to provide for an SLM approach (a minimal sketch of up-front capture follows this list)
     • There are important “paper vs. electronic” paradigm issues to be understood
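
A minimal sketch, in Python, of what “designing metric capture into the system up front” could look like; the decorator, the in-memory store, and the retrieve_record stub are hypothetical illustrations, not part of the IAC study or any particular ERM product:

    import time
    from collections import defaultdict
    from functools import wraps

    # Hypothetical in-memory store; a production ERM system would persist
    # captured metrics to a database or a monitoring service instead.
    METRICS = defaultdict(list)

    def record_timing(metric_name):
        """Record the elapsed wall-clock time of each call under metric_name."""
        def decorator(func):
            @wraps(func)
            def wrapper(*args, **kwargs):
                start = time.perf_counter()
                try:
                    return func(*args, **kwargs)
                finally:
                    METRICS[metric_name].append(time.perf_counter() - start)
            return wrapper
        return decorator

    @record_timing("record_retrieval_seconds")
    def retrieve_record(record_id):
        # Placeholder for the actual ERM retrieval call.
        return {"id": record_id}

Instrumenting operations at the point of use like this avoids retrofitting measurement later, which is the gist of the “design it in up front” principle.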

  7. Broad Categories of ERM Metrics
     • Access to ERM Services
     • Accuracy
     • Capacity
     • Efficiency
     • Participation
     • Productivity
     • Search and Retrieval
     • System
     • User Satisfaction
     • Utilization
     • Legal*
     *Suggested to the IAC team by Robert Williams of Cohasset Associates

  8. “Good” vs. “Bad” Metrics
     • Many metrics are potentially ambiguous, intractable, unreliable, or burdensome to capture
     • Among the more problematic metrics:
       • Record search time
       • Record retrieval time
       • Number of seats (or licenses)
       • Session time
       • Raw number of records in the system
     • All of the above can be captured; however, the interpretation of each can be quite controversial (see the sketch after this list)
       • A long session time, for example, could indicate great success or utter failure
       • Search times can be curiosity-driven, as in surfing the Web
       • A user’s level of commitment and persistence cannot be easily measured
       • Some people are simply better than others at “finding things”
       • Training, domain knowledge, and time of day can be important mitigating factors
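
To illustrate why a raw metric such as session time is ambiguous, here is a small Python example; the session records and field names are invented for illustration only. The same duration can accompany either a productive or an unproductive session, so the number alone cannot distinguish success from failure:

    # Two sessions with identical durations but very different outcomes.
    sessions = [
        {"duration_min": 45, "records_found": 12},  # long and productive
        {"duration_min": 45, "records_found": 0},   # long and likely a failure
    ]

    for s in sessions:
        outcome = "productive" if s["records_found"] > 0 else "possibly unproductive"
        print(f"{s['duration_min']} min session: {outcome}")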

  9. Sample Candidate Metrics for ERM Systems

  10. Sample Candidate Metrics for ERM Systems (cont.)

  11. Sample Candidate Metrics for ERM Systems (cont.)
      Note: Any of these metrics should be used to measure improvement over time relative to a baseline; the numbers are not meaningful in and of themselves (a baseline-comparison sketch follows below). Additionally, the Study Group determined that there is no universal, “silver bullet” metric.
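
A minimal sketch of reading a metric against a baseline, as the note above suggests; the helper function and the example numbers are hypothetical and are not taken from the study:

    def improvement_vs_baseline(baseline, current, lower_is_better=False):
        """Percentage improvement of a metric relative to its baseline value."""
        if baseline == 0:
            raise ValueError("baseline must be non-zero")
        change = (current - baseline) / baseline * 100.0
        return -change if lower_is_better else change

    # Hypothetical example: mean retrieval time dropped from 90 s to 60 s.
    print(improvement_vs_baseline(90, 60, lower_is_better=True))  # ~33.3 (% improvement)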

  12. Summary
