
Integration of Web Accessibility Metrics in a semi-automatic evaluation process

Presentation Transcript


  1. Integration of Web Accessibility Metrics in a semi-automatic evaluation process
     Maia Naftali, Osvaldo Clúa

  2. Introduction
  • What has been done?
    • Implemented existing metrics in an accessibility evaluation tool:
      • WAB Score
      • Failure Rate
      • UWEM Score
    • The evaluation process is semi-automatic: it includes a human filtering step.
  • What for?
    • Compare results across different sites.
    • Analyze, in a real scenario, the difficulties of computing metrics automatically.
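
To make the semi-automatic flow concrete, here is a minimal sketch of how a failure-rate-style metric could combine a tool's automatic results with a human filtering pass. The data structure, field names, and the rule that unconfirmed results are discarded are illustrative assumptions, not details taken from the presentation.

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    checkpoint: str          # e.g. "img-alt" (illustrative name)
    violations: int          # failures the tool reported
    potential: int           # elements the checkpoint applies to
    confirmed: bool = True   # human filter: False drops a false positive

def failure_rate(results):
    """Failure rate over the human-confirmed results: total violations
    divided by total applicable elements, in [0, 1]."""
    kept = [r for r in results if r.confirmed]
    potential = sum(r.potential for r in kept)
    violations = sum(r.violations for r in kept)
    return violations / potential if potential else 0.0

page = [
    CheckResult("img-alt", violations=3, potential=10),
    CheckResult("table-headers", violations=2, potential=4, confirmed=False),
]
print(failure_rate(page))  # 0.3 -- the unconfirmed checkpoint is excluded
```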

  3. Difficulties found
  • Major difficulties
    • Metric accuracy:
      • Exact formulas with variable input: how to achieve repeatable results when human judgment is part of the input.
      • Human filtering is useful, but it requires extra work and depends on the evaluator's knowledge, which does not scale well to large evaluations.
    • Extra parameters:
      • Computing parameters of the formulas that are not directly retrieved from the evaluation results can introduce error, for example the failure points. See the sketch after this list.
    • Threshold criteria and tool accuracy:
      • Guideline checkpoints that are hard to test algorithmically can add noise to the metric computation.
      • Not all checkpoints are tested by every evaluation tool.
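
As an illustration of the "extra parameters" point above, the sketch below estimates the failure points for one checkpoint directly from the markup, since the evaluation results do not carry that number. The heuristic is an assumption made for illustration (every <img> counts as one potential failure point, and any missing or empty alt counts as a violation), and it shows where error creeps in: a decorative image with a deliberately empty alt would be miscounted as a failure.

```python
from html.parser import HTMLParser

class FailurePointCounter(HTMLParser):
    """Estimate failure points for an img-alt checkpoint from raw HTML.
    Heuristic: every <img> is one potential failure point; an <img>
    with a missing or empty alt attribute is one actual violation.
    Decorative images (alt="") and script-injected images are
    miscounted, which is the estimation error the slide refers to."""
    def __init__(self):
        super().__init__()
        self.potential = 0
        self.violations = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.potential += 1
            if not dict(attrs).get("alt"):
                self.violations += 1

counter = FailurePointCounter()
counter.feed('<img src="a.png" alt="logo"><img src="b.png">')
print(counter.violations, "/", counter.potential)  # 1 / 2
```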

  4. Ideas to work on for metric integration
  • Metric categorization into levels
    • A possible categorization:
      • Basic: metrics that use only the checkpoints that can be assessed automatically (by an algorithm). For example: does the IMG tag have an ALT attribute?
      • Semantic or extended: metrics that use the entire checkpoint set.
      • Pragmatic: metrics that also measure the user experience.
  • Motivation: automatic tools will be able to calculate the metrics defined as "basic" with a known error rate. Limiting the scope of the metrics' input will facilitate their programmatic implementation; therefore, any evaluation tool could calculate metrics at a known level of accuracy. A sketch of such a "basic" metric follows.
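
To make the "basic" level concrete, here is a hedged sketch of a metric restricted to checkpoints an algorithm can decide on its own, so that any tool could reproduce it with a known level of accuracy. The checkpoint names and weights are illustrative assumptions, not taken from the presentation.

```python
# A "basic"-level metric: only checkpoints an algorithm can decide
# without human judgment. Names and weights are illustrative.
AUTOMATIC_CHECKS = {
    "img-has-alt": 1.0,    # every <img> carries an alt attribute
    "html-has-lang": 0.5,  # <html> declares a lang attribute
}

def basic_score(results):
    """results maps checkpoint name -> (violations, potential).
    Returns a weighted failure rate in [0, 1]; lower is better."""
    num = den = 0.0
    for check, weight in AUTOMATIC_CHECKS.items():
        violations, potential = results.get(check, (0, 0))
        if potential:
            num += weight * violations / potential
            den += weight
    return num / den if den else 0.0

print(basic_score({"img-has-alt": (2, 8), "html-has-lang": (0, 1)}))
# ~0.167: weighted average of 2/8 (weight 1.0) and 0/1 (weight 0.5)
```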
