
Blind Text Entry for Mobile Devices


Presentation Transcript


  1. AAATE’05 Henri Warembourg Faculty of Medicine Lille - France - September 6-9, 2005 Blind Text Entry for Mobile Devices Georgios Yfantidis NOKIA R&D Copenhagen, Denmark Georgios.Yfantidis@nokia.com Grigori Evreinov Dept. of Computer Sciences University of Tampere, Finland Grigori.Evreinov@cs.uta.fi

  2. Introduction • Handheld devices with touchscreens are becoming widespread (smart phones and Personal Digital Assistants). • They host a diverse set of applications targeted at many users. • Blind people could also benefit from using these touchscreen-based portable devices. • Touchscreen interaction (combined with GUIs), however, is completely antithetical to blind users' skills: there are no feedback cues, absolute pointing and selection is difficult, and virtual keyboard tapping is out of the question, since it demands accuracy and point relocation after each character entry.

  3. Introduction 2 • Some of the inspiration came from Gedrics. • We developed a new text entry technique, the Gesture Driven Software Button (GDSB). • It combines adequate typing speed with accessibility for blind users.

  4. Introduction 3 • Gedrics provide a way to manipulate a graphical user interface with the help of icons that respond to gestures made directly inside the icon. • The most distinctive aspect of Gedrics was their “structuredness” and their intuitive way of gestural interaction. • We wanted to maintain a similar simplicity that would make the GDSB gestures reliable, easy to learn and universally accessible. • BUT… while in Gedrics the icons occupied definite positions on a touchscreen, we envisaged that it could bring more benefits if the icon could follow the stylus and contain full text-entry functionality: “a full keyboard under the fingertip”.

  5. Interface and Interaction Style • The gesture driven software button has a rectangular shape that contains eight characters positioned in the basic arrow directions (up, down, left, right) and the intermediate positions between them. • There are three layers in total, each containing a different set of characters. For the English alphabet, 24 characters are directly arrayed in the three layers, plus 2 characters that are embedded in certain positions.
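A minimal sketch of how such a directional selection might be decoded from a stylus displacement. The sector arithmetic, units and dead-zone handling below are our assumptions; the slides give only the geometry:

```python
import math

# Eight GDSB directions, clockwise from "up" (the ordering is an assumption).
DIRECTIONS = ["up", "up-right", "right", "down-right",
              "down", "down-left", "left", "up-left"]

DEAD_ZONE_MM = 4.0  # half-width of the 8 x 8 mm dead zone (see slide 18)

def decode_direction(dx_mm, dy_mm):
    """Map a stylus displacement from the touch-down point (screen
    coordinates, y grows downwards) onto one of the eight directions;
    return None while the stylus is still inside the dead zone."""
    if max(abs(dx_mm), abs(dy_mm)) < DEAD_ZONE_MM:
        return None
    angle = math.degrees(math.atan2(dx_mm, -dy_mm)) % 360  # 0 deg = up
    return DIRECTIONS[int((angle + 22.5) // 45) % 8]       # 45-deg sectors
```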

  6. Layer Accession Methods/Modes • GDSB has two different modes for layer accession. • ADS mode changes layers cyclically, in an adaptive time loop. • 2K mode bases the layer change on two physical arrow keys. • In both modes, touching anywhere on the screen with the stylus/finger activates the software button's interface, with the first layer ready to use.

  7. ADS Mode • One layout succeeds the other after a dwell time interval. • Layer switching does NOT happen continuously: it is triggered when the user waits in the central position without lifting the stylus after the initial tap. • Dwell time starts after the following events: initial touching of the screen (central position of any layer); stopping after backtracking to the start position (cancelling the previous selection); completing a gesture (movement) towards a direction without lifting the stylus. That is, dwell determines when the substitution function will be activated.
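A sketch of that dwell loop, assuming a simple restartable timer (the interval value and the class structure are our assumptions, not the original implementation):

```python
import time

class DwellTimer:
    """Restartable timer for the ADS dwell loop. It restarts on exactly
    the events listed above; what its expiry means depends on where the
    stylus is: at the central position it switches to the next layer,
    after a completed directional movement it activates substitution."""

    def __init__(self, dwell_s=0.8):   # 0.8 s is an assumed default
        self.started_at = None
        self.dwell_s = dwell_s

    def restart(self):
        # Called on: initial touch, backtrack to centre, end of a movement.
        self.started_at = time.monotonic()

    def expired(self):
        return (self.started_at is not None
                and time.monotonic() - self.started_at >= self.dwell_s)
```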

  8. Adaptation • The adaptation of the button takes place by changing its position and functionality, but time is the main factor that determines how this adaptation and these transitions happen. • One of the design decisions taken in the development of the technique concerned which dwell time interval satisfies the user requirements and when it should be changed. The original algorithm [Evreinov and Raisamo, 2004] was modified and improved for the proposed technique.
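The slides do not spell the modified algorithm out. The sketch below is one plausible adaptive rule, entirely our assumption and not the published algorithm: drift the interval towards the user's observed pace, and back off after cancellations:

```python
def adapt_dwell(dwell_s, reaction_s, cancelled, lo=0.3, hi=1.5):
    """Hypothetical adaptation rule (NOT from the paper): track the
    user's reaction time while typing goes well, and lengthen the
    interval after a cancelled selection."""
    if cancelled:
        dwell_s *= 1.2                               # overshoot: allow more time
    else:
        dwell_s = 0.7 * dwell_s + 0.3 * reaction_s   # follow the user's pace
    return min(max(dwell_s, lo), hi)                 # clamp to sane bounds
```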

  9. 2K Mode • The 2K (2-key) version of our technique features a layer-switching system that is based on two physical arrow keys. Interacting with the gesture driven button without pressing any physical key happens within the first layer. • The Up and Down keys transfer interaction to the respective layers. • Text entry can be concurrent in this way: layer access happens in combination with the movement/gesture.
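A sketch of the key-to-layer mapping; which key selects which layer is our assumption, as the slide only says the two arrow keys transfer interaction to the respective layers:

```python
def active_layer(up_pressed, down_pressed):
    """Assumed 2K mapping: no key held -> layer 1,
    Up held -> layer 2, Down held -> layer 3."""
    if up_pressed:
        return 2
    if down_pressed:
        return 3
    return 1
```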

  10. Entering Text • The first action that the user has to plan when s/he enters text or a command is to select the correct layer to which the character belongs. This can be a sequential or a concurrent action depending on the technique used. • The actual text entry begins when the user moves/slides the stylus towards one of the eight directions which encode the characters. After this sliding there is a speech auditory signal with the name of the character that is about to be entered. • Blind or sighted users rely on this auditory feedback to smoothly and successfully interact with the application. Lifting the stylus signifies the end of the text entry gesture. • If for any reason the user would like to cancel a selection, instead of lifting s/he can backtrack towards the start position.
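A sketch of this lift-to-commit, backtrack-to-cancel flow. The structure, the `speak` stub and the `layers[layer][direction]` layout are our assumptions (the original software was written in Visual Basic):

```python
from enum import Enum, auto

def speak(text):
    print("speech:", text)  # stand-in for the real speech feedback

class State(Enum):
    IDLE = auto()
    PREVIEW = auto()

class GestureEntry:
    """layers[layer][direction] holds the character encoded there."""

    def __init__(self, layers):
        self.layers = layers
        self.state, self.char = State.IDLE, None

    def on_move(self, layer, direction):
        if direction is None:            # backtracked to start: cancel
            self.state, self.char = State.IDLE, None
            return
        char = self.layers[layer][direction]
        if char != self.char:            # announce each new candidate once
            self.char, self.state = char, State.PREVIEW
            speak(char)

    def on_lift(self):
        """Lifting ends the gesture and commits the previewed character."""
        committed = self.char if self.state is State.PREVIEW else None
        self.state, self.char = State.IDLE, None
        return committed

# Toy usage with an invented two-character layout:
entry = GestureEntry({1: {"up": "a", "right": "b"}})
entry.on_move(1, "right")   # speech: b
print(entry.on_lift())      # b
```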

  11. Substitution Process • The layers can only accommodate 24 characters in directly accessible positions. • Other special characters, signs and operations have to be activated through a process of “substitution”. • The concept is that some of the primary characters may have a coupled function or another character in the same direction, which can substitute the basic character when the user decides so. • The differentiation from the normal way of selecting and entering is the use of a waiting time that follows the sliding towards a direction. • Instead of lifting the stylus after the first feedback cue, the user can wait in the same position to hear a second signal. Lifting the stylus at this point will result in successful entry of the symbol or operation that “dwells behind” the primary character.

  12. Substitution 2 • There are certain mnemonic rules that led us to couple certain characters and functions with each other. For example, “D” can be associated with “Delete”, “S” with “Space” and “N” with “Next Line”, in terms of the correspondence of their initial letters.
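A sketch of that coupling; only the D/S/N pairs come from the slide, and the control-token spellings are our placeholders:

```python
# Primary character -> the symbol/operation that "dwells behind" it.
SUBSTITUTES = {
    "D": "<delete>",  # "D" for Delete (placeholder token)
    "S": " ",         # "S" for Space
    "N": "\n",        # "N" for Next Line
}

def resolve(primary, waited_for_second_cue):
    """Return the substitute if the user kept the stylus down until
    the second feedback signal, otherwise the primary character."""
    if waited_for_second_cue:
        return SUBSTITUTES.get(primary, primary)
    return primary
```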

  13. Method Evaluation • 16 volunteers (staff and students from UTA) divided into two groups: 2K mode testing (6 male, 2 female) and ADS mode testing (3 male, 5 female). • None of the participants had experience in text entry with gestures. • 14 right-handed, 2 left-handed. • Users were blindfolded, and the GDSB layout was hidden at all times. • The study was carried out on an iPAQ Pocket PC 3800 series device. • Tests took place in a usability laboratory.

  14. Test • One exploratory trial to get acquainted with the feature; during that trial participants were tutored about the interface and the interaction style. • One trial consisted of entering twenty words, randomly selected from a set of 150 words and displayed one at a time on the top line of the experimenter's screen. • The blindfolded subjects only listened to the test word; they could repeat its playback on demand by clicking on the bottom-left corner of the touchscreen. • The test words were 6-13 characters in length, with a mean of 8.5. • Each of the subjects accomplished 10 trials, the last eight of which were taken for statistical analysis. • 10,880 characters were entered in each mode. • Key figures such as the number of errors per trial, motor reaction times, average reply time and the dwell parameters were stored for each trial in a data array.
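As a quick consistency check on that figure (our arithmetic from the numbers above): 8 subjects × 8 scored trials × 20 words × 8.5 characters per word on average = 10,880 characters per mode.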

  15. Results • During the experiments with the 2K version, speed varied from 10 to 22 words per minute, and the average typing speed was about 15 wpm, or 75 characters per minute. • The average typing speed with ADS was about 9 wpm (standard deviation 0.8), which translates to about 45 characters per minute. • Taking into account that this is blind interaction through a touchscreen, and that it also supports one-hand manipulation, this can be considered a reasonable typing rate.
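Both wpm-to-cpm conversions follow the standard text-entry convention of five characters per word: 15 wpm × 5 = 75 cpm, and 9 wpm × 5 = 45 cpm.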

  16. Errors • Contrary to what might have been the logical expectation, errors proved not to affect typing performance for either version. • The subject who achieved the maximal typing speed with the 2K version had the same number of errors as subject 7, who performed a lot worse. • Subjects 3 and 6, who shared the same typing speed, showed a big difference in the number of errors they committed. • The subjects with the second and third best typing performance had more errors than 50% of all the people who took part in the experiment.

  17. The typing speed and errors averaged over 8 trials for all subjects and both modes of the technique (2 × 1280 words, 2 × 10880 tokens).

  18. Errors 2 • About 30% of the errors during the experiment happened because the software was written in Visual Basic and some procedures could not be fast enough due to hardware restrictions; the processor speed of the PDA used was 200 MHz. • Sometimes the users were typing so fast that the character was not entered and they had to repeat it, while continuity and rhythm are very important for typing. • Since the errors occurred mostly to the faster users at the peak of their performance, they were able to rapidly correct them by re-entering the character. • Prevention of speed loss in error situations was enhanced by the inherent compactness of the software button (23 × 23 mm²), which has a very small “dead zone” (8 × 8 mm²) and provides fast gesture recognition.
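For scale (our arithmetic from the figures above, assuming the icon is centred under the touch-down point): a selecting gesture only has to travel between roughly 4 mm (the dead-zone half-width) and 11.5 mm (half the 23 mm button width), which helps explain why gesture recognition is this fast.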

  19. Reaction time and performance • The average motor reaction time (and STD) per character, and the typing speed in words per minute (8 trials, 1280 words, 10880 characters) with the 2K technique.

  20. Figure 2 shows that the inverse correlation between reaction time with the 2K version and the test subjects' typing speed is high: -0.9898. • That is, greater efficiency in typing goes together with a consistently reduced reaction time. • Subjects also demonstrated a clear tendency to react faster as they proceeded through the test sessions. • For instance, the average reaction time for subject 1 decreased from 970 ms during the first session to 640 ms by the last session. • The rest of the subjects followed a similar pattern and gradually improved their response times. Furthermore, all of them managed to reach, at least once during the experiment, values close to the average reaction time that corresponds to about 18 wpm. • When dwell time is used (the ADS version), the reaction time is calculated dynamically. Since the reaction time value is part of the dwell time, there is no reasonable way to discuss the raw values of the response.
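A minimal sketch of how such a correlation can be computed; the per-subject values below are illustrative only, not the measured data:

```python
from statistics import correlation  # Pearson r, Python 3.10+

# Hypothetical per-subject averages (NOT the experimental data):
reaction_ms = [640, 700, 760, 820, 880, 940, 1000, 1060]
speed_wpm   = [22, 19, 17, 15, 14, 12, 11, 10]

r = correlation(reaction_ms, speed_wpm)  # strongly negative here
print(f"r = {r:.4f}")
```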

  21. Selection Times per Layer • The average selection time of the characters in the first layer was about 760 ms using the 2K version. • The average values for the second and third layers were about 880 and 1180 ms respectively. • Those differences in time derive mostly from the different frequency of use of the characters in each group (layer) during the test. • In that sense, the results are expected, and the slow selection time in the third layer came mostly because the characters in that layer occur infrequently in the English language (less than 9%). • Users did not have the chance to practise entering those characters enough, and the layout of the third layer failed to become as “intuitive” in its use as the commonly used first two layers. • Thus, the figure for average selection time in the third layer is subject to significant decrease once experience with the technique is established.

  22. Summary • We have developed two versions of a text entry technique that is suitable for blind people. The method can be adapted to many platforms. • We tested the technique on a PDA to assess its suitability and effectiveness for smart mobile devices. • The tests indicated that the average text entry speed is at a high level for the 2K version (15-18 wpm), while the ADS method is a reasonable tradeoff that balances speed (8-12 wpm) and accessibility, offering single-hand manipulation. • Another positive outcome of the testing is that typing errors with both versions of our technique do not threaten its effectiveness, as the majority of users can easily correct them on the spot. • Having a strong tool for text entry is the first but definite step towards finally giving visually impaired users full access to smart mobile devices. Thank you and Merci Beaucoup!!!
