
Usability with Project Lecture 10 – 10/10/08

Presentation Transcript


  1. Usability with Project Lecture 10 – 10/10/08 Dr. Simeon Keates

  2. Exercise – part 1 • Last week you were asked to bring in 4 items • Landline telephone • Mobile telephone • TV remote control • 1 other item • This week… • Perform exclusion calculations on each product using the data on: • http://www.eng.cam.ac.uk/inclusivedesign/

  3. Exercise – part 2 • Identify the common methods of interacting with the product • Identify which of the 7 DFS capability scales are involved in the interaction • Based on the DFS scales, estimate the limiting capability demand for each scale

  4. Exercise – part 3 • Report the number and %age of people excluded by each capability demand • For 16+ and 75+ • Report the total number and %age of people excluded by the product • For 16+ and 75+ • Prepare a 5 minute presentation to discuss: • Your exclusion calculation assumptions • Your exclusion calculation results • What were the principal causes of exclusion? • What do you think should be done to reduce the exclusion for each product?
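To make the calculation steps in parts 1–3 concrete, here is a minimal sketch of how an exclusion calculation can be organised once you have the per-scale figures from the Cambridge data set. The scale names, demand levels and exclusion fractions below are placeholders for illustration, not real survey values.

```python
# Hypothetical sketch of an exclusion calculation. For each capability scale,
# record the limiting demand the product places on users, look up the
# proportion of the population unable to meet that demand, and report the
# per-scale and overall exclusion. The lookup table is invented for
# illustration -- use the real figures from the Cambridge data set.
EXCLUSION_TABLE = {
    # scale -> {demand level: fraction of the 16+ population excluded}
    "vision":    {0: 0.000, 1: 0.012, 2: 0.038},
    "hearing":   {0: 0.000, 1: 0.009, 2: 0.031},
    "dexterity": {0: 0.000, 1: 0.015, 2: 0.047},
}

def product_exclusion(demands: dict[str, int]) -> None:
    """demands maps each capability scale to the product's limiting demand level."""
    per_scale = {scale: EXCLUSION_TABLE[scale][level] for scale, level in demands.items()}
    for scale, excluded in per_scale.items():
        print(f"{scale:10s} demand {demands[scale]}: {excluded:.1%} excluded")
    # The true total exclusion needs the joint data (the same person may be
    # excluded on several scales); the worst single scale gives a lower bound.
    print(f"total exclusion (at least): {max(per_scale.values()):.1%}")

product_exclusion({"vision": 1, "hearing": 0, "dexterity": 2})
```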

  5. What is “reasonable accommodation”?

  6. Defining “reasonable accommodation” • Must offer “reasonable accommodation” • BUT what is reasonable? • Not defined explicitly • Companies left guessing • Will be defined in courts • Major risk/headache for companies

  7. IDEOLOGICAL DIVIDE – Attitudes to “reasonable accommodation” [Diagram: two views of equitable access – Pragmatists: MINIMUM (compliance), access to functionality; Idealists: IDEAL, access to functionality in the same time]

  8. Interesting questions for companies • Is the equitable access ideal possible? • Is the equitable access minimum possible? • “Equal, but different” problem • Users with functional impairments => longer times • Can technology always make up the difference in user capabilities? 3 case studies…

  9. Case study 1: The personal information point

  10. The information point accessibility assessment Sensory assessment: • Screen too high and not adjustable • Audio output not duplicated • Visual output not duplicated Motor assessment: • Need to stand • Reaching and dexterity demands • 45% of target users excluded Is this “reasonable”?

  11. Case study 2 – Cursor assistance for motor-impaired users Symptoms that can affect cursor control: • Tremor • Spasm • Restricted motion • Reduced strength • Poor co-ordination

  12. User group behaviours • Peak velocities • Target activation times • No. of incorrect clicks

  13. Summarising the differences • Younger adults (IBM interns) • Shortest (1), fastest (1), more errors (3) - slapdash • “I can fix it” • Games culture? • Adults (IBM regulars) • Shorter (2), faster (2), fewest errors (1) • Best compromise between speed and accuracy? • Parkinson’s users • Longer (3), slowest (4), fewer errors (2) • Slow, but sure • Older adults • Longest (4), slower (3), most errors (4) • Vision difficulties? • Lack of experience

  14. A method of cursor assistance • Haptic gravity wells [Diagram: a gravity well surrounding the target exerts an attractive force on the cursor]
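As an illustration of the gravity-well idea, the sketch below applies a spring-like attractive force when the cursor is inside a well around the target. It is a hypothetical implementation for illustration only; the radius and gain values are assumptions, not the parameters used in the study.

```python
# Hypothetical sketch of a haptic gravity well: inside the well radius the
# device applies a force pulling the cursor towards the target centre.
# The constants are illustrative, not the values used in the study.
import math

def gravity_well_force(cursor, target, radius=60.0, gain=0.8):
    """Return the (fx, fy) force to send to the haptic device."""
    dx, dy = target[0] - cursor[0], target[1] - cursor[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > radius:
        return (0.0, 0.0)                         # outside the well: no assistance
    strength = gain * (1.0 - dist / radius)       # stronger nearer the centre
    return (strength * dx / dist, strength * dy / dist)

print(gravity_well_force(cursor=(100, 100), target=(130, 100)))  # pulled to the right
```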

  15. Experimental set-up

  16. The effect of gravity wells

  17. Results - Throughput
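For readers unfamiliar with the measure, throughput is the standard pointing-performance metric that combines speed and accuracy: the index of difficulty of the movement divided by the movement time. The sketch below uses the general ISO 9241-9 style formulation, which may not be the exact variant used in these trials.

```python
# Hedged sketch of the standard throughput calculation used in pointing
# studies (ISO 9241-9 style); the lecture's exact variant may differ.
import math

def throughput(amplitude: float, effective_width: float, movement_time: float) -> float:
    """Throughput in bits/s: index of difficulty divided by movement time."""
    index_of_difficulty = math.log2(amplitude / effective_width + 1)  # bits
    return index_of_difficulty / movement_time                        # bits per second

print(throughput(amplitude=256, effective_width=32, movement_time=1.2))  # ~2.64 bits/s
```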

  18. Case study 2 summary • Haptic gravity wells are clearly very helpful • Motor-impaired (MI) users “with” perform on a similar level to able-bodied (AB) users “without” BUT: • AB users also improve “with” • Is this “equal” time? • Is this “reasonable”???

  19. Case study 3 – Paperless office • AN Other wants to move to a paperless office • Currently receives 3.5 million pages per day • Paper documents are stored as TIFFs • Section 508 accessibility requirements • Sight-impaired • Low vision • Current solution – employ readers • “Equal, but different.” • Is this reasonable?

  20. The study documents • Almost fully unconstrained • Content: • Unconstrained vocabulary • Text: • Typed • Handwritten • Annotated • Stamps • Graphical content: • Diagrams • Charts • Graphs

  21. Examples of the study documents

  22. Examples of the study documents (cont.)

  23. Examples of the study documents (cont.)

  24. Readability metrics (text) • Translation rates: • Character-by-character • Word-by-word • Number and %ages of errors: • Level 1 - Minor • Level 2 - Moderate • Level 3 - Serious

  25. OCR – The scanning process [Diagram: a scanned TIFF file is converted to a binary pixel bitmap – each character becomes a grid of 1s (ink) and 0s (background)]
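As a small illustration of the binarisation step sketched on this slide, the snippet below thresholds a grayscale scan into the 0/1 pixel grid an OCR engine works on. The fixed threshold is an assumption for illustration; real engines typically use adaptive binarisation.

```python
# Minimal sketch (not from the lecture) of turning a grayscale scan into the
# binary bitmap an OCR engine processes. The threshold of 128 is an assumption.
import numpy as np

def binarise(grayscale_page: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Map a grayscale image (0-255) to a binary bitmap: 1 = ink, 0 = background."""
    return (grayscale_page < threshold).astype(np.uint8)

# Example: a tiny 4x4 patch of a scanned character
patch = np.array([[250,  30,  30, 250],
                  [250,  20, 250, 250],
                  [250,  25, 250, 250],
                  [250,  30,  30, 250]], dtype=np.uint8)
print(binarise(patch))  # dark pixels become 1s, paper becomes 0s
```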

  26. OCR – Possible sources of scanning errors Data LOSS NOISE

  27. Comparing three OCR engines OmniPage: “…*also *develop *the *skills *to *irxvert *containers *to *get *ob^ects *inside. *?e *should *begin *to *Znd *small *details *i? *a favorite *picture *baa? *?a *bird *in *a *true, *a *small *ash *in *the *ocean}. *his *understanding *of *familiar *ob^ects *should…” Recognita: “…*also *de???op *the *s?il?s *ta *ivart?an#ainer?to *e?ob??cts?n?id?. *?e *shau?ti *b?ta *Znd *srnali *details *i?a *favarita *picture *baa??bi?rd *in *a *tra?,a *srr?a????in *tk?e *o?ean}. *?is *und?rt?a?af *fa.?i?iar *ob?ects *hau?d *co??i?u?ta *de?eiap *d?i?houi d…” FineReader: “…also develop the skills to invert containers to get objects *inside. He should begin to find small details in a favorite picture book (a bird in a *tree, a small fish inthe *ocean). His understanding of familiar objects should…”

  28. OCR results – Calculating the error rates • Record the document properties • # of words, characters • Font types (e.g. typed, handwritten) and sizes • Count instances of error types • Redaction errors • Spaces added or removed • Format errors (e.g. wrong case, incorrect text positioning) • Extraction errors (i.e. incorrect translation) • By character • By word • Classify severity • Level 1 – minor • Level 2 – moderate • Level 3 – severe • Calculate %age error rates Note: classification for sighted users
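A minimal sketch of the word-level error-rate calculation described above, assuming the extraction errors have already been counted and classified by hand. The word counts and error counts below are invented for illustration, not figures from the study.

```python
# Hypothetical sketch: given hand-counted extraction errors per severity
# level, report the %age error rate per document. The counts are invented
# for illustration, not results from the study.
def word_error_rates(total_words: int, errors_by_level: dict[int, int]) -> dict[str, float]:
    total_errors = sum(errors_by_level.values())
    rates = {f"level {lvl}": count / total_words for lvl, count in errors_by_level.items()}
    rates["overall"] = total_errors / total_words
    return rates

doc = word_error_rates(total_words=420, errors_by_level={1: 14, 2: 9, 3: 4})
for label, rate in doc.items():
    print(f"{label}: {rate:.1%}")   # e.g. overall: 6.4%
```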

  29. OCR results – An example extracted document – 1 Original text: [Typed page document] Extracted text: *evaluators, shQWfag’an interest in imitating words *and sp *eech.^j^kd real words along^vith j argon to exjgpss. himself . *dflffVily indicated that they understand most of what tie *says.^H^^owedhisuse of two+ word phrases

  30. OCR results – An example extracted document – 2 Original text: [Typed page with notes document] Extracted text: *IBISES6?? *fc?day *?P *a *yearly *SJn *exam’ *She *is *a *40 *^ear *old *white *feraale status post ^aginal hysterectomy five years ago. She has continued to have some difficulty with loss || of urine upon coughing or sneezing. I had given her some samples of Ditropan last year but || *SShZ *^ *t0 *^ *theSe’ *ShS *feelS *that *her *wei^ contributes a ^reatleal *Z *££ problems *with *mcontmence She has had some continuing problems with depressive *sympW *S^e cries very easily and it is getting a little bit worse. She also feels very *withdrawn *She tells roe that her sister in Florida had a similar history and was on *Paxil and did.

  31. OCR results – An example extracted document – 3 Original text: [Pictures and Graphs document] Extracted text: *2j*»rlfar Cardiology || *^^m Chart: *3£4U3& *Dr *-^ || *0 _. *, Medications: *Adenosinc *Dose: || Dose: *jjj&f»- *f-^- *\ *Dobutaimne

  32. OCR results – Overall word error % rates A “typical sentence” contains 7 words. An extraction error rate of 6.5% equates to 1 word error every 2 sentences.
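(Sanity check of that figure, my arithmetic rather than the slide's: a 6.5% word error rate means on average one error every 1/0.065 ≈ 15 words, and 15 words is roughly two 7-word “typical sentences”, hence about 1 word error every 2 sentences.)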

  33. OCR results – Context metrics • Text location awareness – PARTLY SATISFIED – columns only • Does the data extraction technology output provide an indication of where the text is on the page? • Table search – VERY LIMITED – recognised individual columns, not tables • Does the data extraction technology recognise tables and support searching within them? • Diagram detection – VERY LIMITED – recognised as “not text” • Does the data extraction technology recognise diagrams and support searching within them? • Graph detection – VERY LIMITED – as for diagram detection • Does the data extraction technology recognise graphs (charts) and support searching within them? • Dealing with uncertainty – SATISFIED – all engines highlighted uncertain text • Does the data extraction technology recognise entities on the page that it cannot translate and highlight this? • Text emphasis – PARTLY SATISFIED – could, but not always correct • Does the data extraction technology recognise when the author of the document has selected a particular item of text for special emphasis? • Multiple selection lists – VERY LIMITED – words and columns, but no “meta” info • Does the data extraction technology recognise multiple selection lists and can it identify the item(s) selected?

  34. Conclusions of OCR investigation “Current OCR technology is not capable of providing an acceptable level of text extraction from medical evidence as it is now received.” “Technology cannot provide equitable access in this case. Alternative methods are required.” “Equal, but different.”

  35. Overall summary • Some products clearly not “reasonable” • Case study 1 • Technology cannot always make up for lack of user capability • Case study 2 • Even when it does – the goalposts move!!!

  36. Conclusion • What is needed is a framework for evaluating “reasonableness” • Based on quantifiable metrics • Reliable, repeatable, consistent, robust

  37. A framework for assessing acceptability – 1 • Stage 1 – Identify each target user group/persona • e.g. blind users, >65s, etc. • Stage 2 – Identify each component step in the interaction per group • e.g. press Enter, activate OK button, move cursor to icon, etc. • Stage 3 – Compare number of steps per group • e.g. 10 for able-bodied, 30 for blind using screen reader DECISION GATEWAY 1 Are the numbers of steps roughly equal? If not – differences need to be justified or remedied

  38. A framework for assessing acceptability – 2 • Stage 4 – Perform user studies with baseline user group • Calculate times, error rates, etc. • Stage 5 – Perform user studies with target user groups • Calculate times, error rates, etc. DECISION GATEWAY 2 Could all of the users complete the task? If not – causes of difficulties need to be removed or remedied

  39. A framework for assessing acceptability – 3 • Stage 6 – Compare error rates for each group • e.g. 2 per trial able-bodied, 5 per trial blind using screen-reader DECISION GATEWAY 3 Are the error rates the same or similar across user groups? If not – significant differences have to be justified or remedied

  40. A framework for assessing acceptability – 4 • Stage 7 – Compare times to complete tasks for each group + modifiers • e.g. number of component steps per group + • proportion of component steps affected by group disabilities + • relative importance of each step (3 = critical, 1 = peripheral) + • relative severity of the level of disability + • additional latencies from AT used DECISION GATEWAY 4 Are the modified times the same or similar across user groups? If not – significant differences have to be justified or remedied
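A minimal sketch of how the four decision gateways could be mechanised once the per-group data from Stages 1–7 has been collected. The tolerance threshold deciding what counts as “roughly equal” is an assumption for illustration; the framework itself leaves that judgement to be justified case by case.

```python
# Hypothetical sketch of the four decision gateways. The tolerance value
# (how much worse than baseline still counts as "roughly equal") is invented.
from dataclasses import dataclass

@dataclass
class GroupResult:
    name: str
    steps: int               # Stages 2/3: component steps in the interaction
    completed_all: bool      # Stages 4/5: did every user finish the task?
    errors_per_trial: float  # Stage 6
    modified_time: float     # Stage 7: time adjusted by the listed modifiers

def assess(baseline: GroupResult, group: GroupResult, tolerance: float = 1.5) -> list[str]:
    failures = []
    if group.steps > baseline.steps * tolerance:
        failures.append("Gateway 1: step counts not roughly equal")
    if not group.completed_all:
        failures.append("Gateway 2: not all users could complete the task")
    if group.errors_per_trial > baseline.errors_per_trial * tolerance:
        failures.append("Gateway 3: error rates differ significantly")
    if group.modified_time > baseline.modified_time * tolerance:
        failures.append("Gateway 4: modified times differ significantly")
    return failures  # each failure must be justified or remedied

able_bodied = GroupResult("able-bodied", 10, True, 2.0, 45.0)
screen_reader = GroupResult("blind, screen reader", 30, True, 5.0, 120.0)
print(assess(able_bodied, screen_reader))
```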

  41. When we come back… • User trials • How to plan the trials • How to select users • How to conduct the sessions • How to analyse the data gathered • How to make design recommendations • Designing and evaluating for unusual circumstances • Airports • Mobile phones • Making the business case for usability • How to calculate the “bottom line” impact • Project • Finishing your design and then testing with “real” people!

  42. Exercise

  43. Exercise – part 1 • Perform an exclusion analysis on your web-site • (As you did on Wednesday) • Prepare a summary of your calculation • Assumptions • Levels of capability required • Exclusion (total and %age) for 16+ and 75+ • Make any changes necessary to your site • + any outstanding ones from last couple of weeks

  44. And finally… • Turn to the back page of today’s handout…
