Presentation Transcript


  1. 476 Midterm Results [Histogram: # of students vs. score] Midterm average: 75.6

  2. Electrical and Computer Engineering Dept. Human Factors in VR

  3. [Diagram: the user (programmer, trainee, etc.) and the VR system architecture]

  4. Human factors in VR (Stanney et al., 1998) – three areas: Human Performance Efficiency, Health and Safety, Societal Implications

  5. Human factors in VR (Stanney et al., 1998) • Will the user get sick in VR? • How should VR technology be improved to better meet the user’s needs? • Which tasks are most suitable for users in VR? • How much feedback from VR can the user process? • Which user characteristics will influence VR performance? • Will the user perceive system limitations? • Will there be negative societal impact from users’ misuse of the technology? • What kinds of designs will enhance the user’s performance in VR?

  6. Human factors vocabulary • HF study – a series of experiments conducted under rigorous conditions and centered on the user (can be a controlled study or a case study); • Experimental protocol – establishes a structured sequence of experiments that all participants need to perform; • Trial – a single instance of the experiment; • Session – a sequence of repeated trials; • Rest period – time between sessions; • Experimental database – files that store the experimental data; • Institutional Review Board (IRB) – watchdog office regulating HF experiments; • Principal Investigator (PI) – the person conducting the HF study; needs to be certified by the IRB.

  7. H. F. vocabulary - continued • Subject – a participant in an HF study (male or female, any age, volunteer or paid, right- or left-handed, able-bodied or disabled, etc.); • Experimental group – the subjects on which the experiments are done; • Control group – a number of subjects used for comparison with the experimental group; • Controlled study – a study that uses both an experimental and a control group; • Case study (also called pilot study) – a smaller study with no control group; • Consent form – needs to be signed by all participants in the study; • Baseline test – measurement of the subject’s abilities before the trials.

  8. Human factors in VR (Stanney et al., 1998) – Human Performance Efficiency, Health and Safety, Societal Implications

  9. The stages of human factors studies: Determine focus → Develop experimental protocol → Recruit subjects → Conduct study → Analyze data

  10. The stages of human factors studies: Determine focus → Develop experimental protocol → Recruit subjects → Conduct study → Analyze data

  11. Human factors focus • What is the problem? (e.g., people get headaches) • Determines the hypothesis (e.g., faster graphics is better); • Establishes the type of study (usability, sociological, etc.); • Objective evaluation, subjective evaluation, or both? • …

  12. The stages of human factors studies: Determine focus → Develop experimental protocol → Recruit subjects → Conduct study → Analyze data

  13. Experimental protocol • What tasks are done during one trial? • How many trials are repeated per session? • How many sessions per day, and how many days for the study? • How many subjects are in the experimental and control groups? • What pre- and post-trial measurements are done? • What variables are stored in the database? • What questions go on the subjective evaluation form?
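
As an illustration only, the protocol questions above could be pinned down in a small configuration object before the study starts. The sketch below assumes a Python study harness; every field name and value is hypothetical, not taken from an actual protocol.

```python
# Hypothetical sketch of an experimental protocol captured as a configuration object.
from dataclasses import dataclass, field

@dataclass
class ExperimentalProtocol:
    trial_task: str                  # task performed during one trial
    trials_per_session: int          # repeated trials per session
    sessions_per_day: int
    study_days: int
    experimental_group_size: int
    control_group_size: int
    pre_post_measurements: list = field(default_factory=list)  # e.g. baseline tests
    logged_variables: list = field(default_factory=list)       # stored in the experimental database
    questionnaire_items: list = field(default_factory=list)    # subjective evaluation form

protocol = ExperimentalProtocol(
    trial_task="capture bouncing ball",
    trials_per_session=12,
    sessions_per_day=1,
    study_days=1,
    experimental_group_size=12,
    control_group_size=12,
    logged_variables=["completion_time_s", "error_count"],
)
```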

  14. The stages of human factors studies: Determine focus → Develop experimental protocol → Recruit subjects → Conduct study → Analyze data

  15. Subject recruitment • A sufficient number of subjects needs to be enrolled in the study to achieve statistical significance; • Place advertisements, send targeted emails, post on the web, go to support/focus groups, ask friends, etc.; • Subjects are screened to exclude those unsuitable for the study; • Subjects sign a consent form; • Subjects are assigned a code to protect their identity; • Subjects sign a release for the use of their data in research; • Subjects may get “exposure” to the technology.

  16. The stages of human factors studies: Determine focus → Develop experimental protocol → Recruit subjects → Conduct study → Analyze data

  17. The stages of human factors studies: Determine focus → Develop experimental protocol → Recruit subjects → Conduct study → Analyze data

  18. Data Collection • VR can sample a much larger quantity of data, and at a higher temporal density, than classical paper-and-pencil methods; • Data recorded online can be played back during task debriefing, and researchers do not have to be co-located with the subjects (remote measurements); • Measurements need to be sensitive (to distinguish between novice and expert users), reliable (repeatable and consistent), and valid (truthful); • Latencies and sensor noise adversely affect these requirements.
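
A minimal sketch of online data recording for later playback during debriefing, assuming a Python logger writing timestamped samples to a CSV file. The file format, sampling rate, and the sensor_read callable are all assumptions for illustration.

```python
# Sketch: log timestamped sensor samples so a trial can be replayed later.
import csv
import time

def record_session(sensor_read, out_path="session_001.csv",
                   duration_s=60.0, rate_hz=100.0):
    """Write (t, x, y, z) rows at roughly rate_hz for duration_s seconds.

    sensor_read is a stand-in callable returning a 3-D tracker position.
    """
    period = 1.0 / rate_hz
    t0 = time.time()
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t", "x", "y", "z"])
        while (t := time.time() - t0) < duration_s:
            x, y, z = sensor_read()
            writer.writerow([f"{t:.4f}", x, y, z])
            time.sleep(period)
```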

  19. Data Analysis • Experiments store different variables, depending on the type of test: • task completion time – the time needed to finish the task (can use the system time, a sequence of actions, or a stopwatch); • task error rate – the number or percentage of errors made during a trial; • task learning – a decrease in error rate or completion time over a series of trials; • Analysis of Variance (ANOVA) – statistical method used to analyze the data and determine whether a statistically significant difference exists between trials or conditions.
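
As a sketch of how such an analysis might be run, the fragment below compares task completion times from three hypothetical groups with a one-way ANOVA using SciPy; the data values are made up for illustration.

```python
# Sketch: one-way ANOVA on task completion times across three conditions.
from scipy import stats

condition_a = [12.1, 10.8, 11.5, 13.0, 12.4]  # completion times (s), group A
condition_b = [14.9, 15.2, 13.8, 16.1, 15.0]  # group B
condition_c = [11.9, 12.2, 11.1, 12.8, 11.5]  # group C

f_stat, p_value = stats.f_oneway(condition_a, condition_b, condition_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A p-value below the chosen threshold (commonly 0.05) suggests a statistically
# significant difference between at least two of the conditions.
```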

  20. Data analysis - continued [Plot: error rates (average and standard deviation) vs. trial number, illustrating task learning] Learning results in fewer errors and more uniform performance among subjects.
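
A small sketch of how a learning curve like the one described above could be computed, assuming per-subject error counts are stored for each trial; the numbers are illustrative only.

```python
# Sketch: learning curve as mean and standard deviation of errors per trial.
import numpy as np

# rows = subjects, columns = trials 1..3 (illustrative error counts)
errors = np.array([[8, 5, 2],
                   [9, 4, 3],
                   [7, 6, 2],
                   [10, 5, 1]])

mean_per_trial = errors.mean(axis=0)  # average error rate per trial
std_per_trial = errors.std(axis=0)    # spread among subjects per trial

for trial, (m, s) in enumerate(zip(mean_per_trial, std_per_trial), start=1):
    print(f"trial {trial}: mean errors = {m:.1f}, std = {s:.1f}")
# Learning shows up as a decreasing mean and, typically, a shrinking standard deviation.
```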

  21. Data analysis - continued [Plot: error rates vs. trial number for Group A, Group B (very easy task), and Group C (very difficult task)] Effect of prior knowledge on task learning.

  22. N i=1 fi Average force =________ N where N is the number of data samples and fi is the magnitude of the i-th force Data analysis - continued • Task learning time and error rates are applicable to VR in general; • Performance measures which are modality specific – for example for force feedback - Average contact force – the forcefulness of the interaction with a virtual object

  23. Data analysis - continued • Another modality-specific performance measure is the cumulative contact force: $\text{Cumulative force} = \sum_{i=1}^{N} f_i\, t$, where $t$ is the sampling interval. Higher cumulative forces/torques indicate higher subject muscle exertion; • This can lead to muscle fatigue and to premature wear of the haptic interface; • There are also task-specific performance measures, such as those associated with cognitive tasks (heart rate, muscle tone, etc.).
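
A minimal sketch of how the two force measures above could be computed from a sampled force record, assuming the samples are stored as a NumPy array; the values and sampling interval below are illustrative only.

```python
# Sketch: average and cumulative contact force from a sampled force record.
import numpy as np

forces = np.array([0.0, 1.2, 2.5, 2.1, 0.8])  # |f_i|, force magnitudes (N), illustrative
t = 0.01                                       # sampling interval (s), illustrative

average_force = forces.mean()        # (1/N) * sum(f_i)
cumulative_force = forces.sum() * t  # sum(f_i * t), in N*s

print(f"average contact force    = {average_force:.2f} N")
print(f"cumulative contact force = {cumulative_force:.3f} N*s")
```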

  24. Usability Engineering • A subclass of human factors research that determines the ease (or difficulty) of use of a given product; • It differs from general-purpose VR human factors studies, which are more theoretical in nature; • Usability studies are product-oriented and part of the product development cycle; • There are no clear standards, because this is an area of active research.

  25. Usability Engineering • The methodology consists of four stages: User task analysis → Expert guidelines-based evaluation → Formative usability evaluation → Summative evaluation

  26. Sea Dragon military command and control application

  27. Usability Engineering – User task analysis • The first stage – define the task and list the user’s actions and the system resources needed to do it; • Identifies the interrelationships (dependencies and order sequences) and the user information flow during the task; • Poor task analysis is a frequent cause of bad product design; • For Dragon, the task is 3-D navigation and object (symbol) selection and manipulation; • It differs from classical 2-D maps and symbols.

  28. Usability Engineering – Expert guidelines-based evaluation • The second stage (sometimes called heuristic evaluation) aims at identifying potential usability problems early in the design cycle; • A pencil-and-paper comparison of the user’s actions done by experts, first alone and then as a group (to determine consensus); • For Dragon, ease of navigation was identified as a critical issue; experts identified problems with system responsiveness when using a flight stick (a wand with buttons) and performing “exocentric” navigation (the user was outside the environment, looking in).

  29. Usability Engineering – Formative usability evaluation • The third stage is an iterative process in which representative users are asked to perform the task; • During task performance, various variables are measured, such as task completion time and error rates; these are used to redesign the product, and the process is repeated; • The Dragon formative evaluation had two stages. During the first stage the best interface was selected from among three candidates (PinchGlove, voice recognition, and wand). Voice recognition was ineffective, and the PinchGlove produced time delays when transferring to another user, so the wand was selected.

  30. Usability Engineering – Formative usability evaluation (continued) • The second stage of the Dragon formative evaluation used a large number of subjects who had to navigate while errors were recorded; • A large effort went into mapping the wand buttons to functions: pan and zoom were mapped to the wand trigger, pitch and heading to the left button, while exocentric rotate and zoom were mapped to the right button.

  31. Usability Engineering – Summative evaluation • The last stage is summative evaluation, done at the end of the product development cycle. It statistically compares the new product with other (competing) products to determine which is better; the selection among several candidates is based on field trials and expert reviews; • The summative evaluation of Dragon studied four parameters: navigation metaphor (egocentric or exocentric), gesture mapping (rate or position control of the camera), display device (workbench, desktop, wall, or CAVE), and graphics mode (stereo or mono).

  32. Usability Engineering • The summative evaluation of Dragon involved thirty-two subjects divided into groups of four. Each group was assigned a different combination of conditions.

  33. Usability Engineering • Results showed that users: • performed fastest on a desktop monitor; • were slowest on the workbench; • Egocentric navigation was fastest in monoscopic graphics; • Exocentric navigation was fastest in stereo graphics; • Rate control was fastest in monoscopic graphics; • Position control was fastest in stereo graphics.

  34. Testbed Evaluation of Universal VR Tasks • Testbeds are a way to deal with evaluation complexities; • They are composed of a small number of “universal” tasks, such as travel in a virtual environment, object selection, and object manipulation; • They provide a structured way to model subject performance, although the evaluation is more expensive to do; • Testbeds make it possible to predict subjects’ performance in applications that include the tasks, sub-tasks, and interaction techniques they use.

  35. Testbed Evaluation of Universal VR Tasks - continued • Testbed evaluation of navigation tasks: obstacles (trees and fences) and targets (flags) can be randomly placed; • There were 38 subjects divided into 7 groups, each using a different navigation technique (steering-based, manipulation-based, and target-specification techniques).

  36. Testbed Evaluation of Universal VR Tasks - continued • Steering-based: pointing, gaze tracking, or torso tracking; • Manipulation-based: HOMER or Go-Go; in Go-Go the subject stretches a hand into the virtual world, grasps an object, and then pulls the virtual camera forward; • Target-specification: ray casting or dragging; • Fastest – gaze-directed (but it produced eye strain and nausea).

  37. Testbed Evaluation of Universal VR Tasks - continued • Testbeds were also used for object selection and placement tasks; • Subjects had to select a highlighted cube and place it in a target area (between the two gray cubes).

  38. Testbed Evaluation of Universal VR Tasks - continued • There were 48 subjects divided among 9 groups. Object selection was done either by ray casting or by occlusion. The scene was viewed on an HMD; • For each subject, the distance to the object, the DOF used for box manipulation (2 or 6), and the ratio of object/target size (1.5x, 3.75x) were varied; • Distant objects were harder to select, and Go-Go was the slowest mode.

  39. Influence of System Responsiveness on User Performance • System responsiveness is inversely proportional to the time between a user input and the simulation’s response to that input; • HF studies were done at Rutgers in the early ’90s to determine the influence of refresh rate (fps) and graphics mode (mono/stereo) on tracking task performance in VR; • Subjects were 48 male and 48 female right-handed volunteer undergraduate students. The task was the capture of a bouncing ball in the smallest amount of time; • Subjects were divided into sub-groups, each having a different refresh rate and graphics mode; • Each subject performed 12 trials separated by 15-second rest periods. The ball appeared with a random velocity direction and maintained a speed of 25 cm/sec.
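
The trial structure described above could be scripted roughly as below. run_capture_trial is a hypothetical stand-in for the actual VR simulation loop; only the trial count, rest period, and ball speed come from the slide, everything else is an assumption.

```python
# Sketch of the session structure: 12 trials, 15 s rest periods,
# ball launched in a random direction at 25 cm/s.
import math
import random
import time

BALL_SPEED_CM_S = 25.0
TRIALS = 12
REST_S = 15

def random_velocity():
    """Random direction in the horizontal plane, fixed 25 cm/s speed."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    return (BALL_SPEED_CM_S * math.cos(theta), BALL_SPEED_CM_S * math.sin(theta))

def run_session(run_capture_trial):
    """Run one session and return the recorded ball-capture times (s)."""
    capture_times = []
    for trial in range(1, TRIALS + 1):
        vx, vy = random_velocity()
        capture_times.append(run_capture_trial(vx, vy))  # returns capture time (s)
        if trial < TRIALS:
            time.sleep(REST_S)  # rest period between trials
    return capture_times
```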

  40. Influence of System Responsiveness on User Performance

  41. Influence of System Responsiveness on User Performance • Ball capturing time was influenced sharply by the graphics refresh rate, especially when the rate was below 14 fps; • The standard deviation grew as the fps decreased, indicating less uniformity among the subjects in the experimental groups; • Stereo made a big difference at low refresh rates, where task completion time was approximately 50% of the time taken to complete the task under monoscopic graphics; • The subjects had different strategies for grasping the ball: at low refresh rates, where the ball motion appeared saccadic, they grasped it in a corner, keeping their arm stationary, while at high refresh rates they moved their hand ballistically to capture it.

  42. Influence of System Responsiveness on User Performance [Plot: mean completion time (sec) vs. frames per second (fps), mono vs. stereo graphics] Effect of frame rate and graphics mode on task completion time (Richard et al., 1995).

  43. Influence of System Responsiveness on User Learning [Plot: mono graphics completion time (sec) vs. trial number] • The frame refresh rate had a significant influence on the way subjects learned; • The group with the highest task learning was the one corresponding to monoscopic graphics displayed at 1 fps.

  44. Influence of System Responsiveness on User Learning [Plot: stereo graphics completion time (sec) vs. trial number] • The least learning was for the groups with high refresh rates (14 fps and 28 fps); their curves were almost flat; • Stereo had a beneficial effect on learning (subjects were more familiar with the task – it was presented to them more realistically).

  45. Influence of System Responsiveness on Object Placement Tasks • Watson performed a test to determine the influence of system responsiveness (SR) and its variability (expressed as the Standard Deviation of System Responsiveness, SDSR) on object placement tasks; • The task was to capture an object and place it on a pedestal while receiving monoscopic graphics feedback; • System responsiveness was altered by changing the frame refresh rate to 17 fps, 25 fps, and 33 fps. For each frame rate, the SDSR was set to 5.6%, 22.2%, and 44.4%; • Results showed that subject performance (expressed as placement time and accuracy) was affected by both SR and SDSR; • The variability in system responsiveness had the largest influence on placement tasks done at low refresh rates; the worst was placement done at 17 fps with 44.4% SDSR; • When done at 33 fps and 5.6% SDSR, accuracy improved by 90%.
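
A sketch of how SR and SDSR might be estimated from logged input-to-response latencies. Expressing SDSR as a percentage of the mean is an assumption made for illustration, and the latency values below are made up.

```python
# Sketch: system responsiveness (SR) and its variability (SDSR) from frame latencies.
import numpy as np

# Illustrative input-to-response delays per frame (ms).
frame_latencies_ms = np.array([30.1, 29.8, 31.0, 42.5, 30.4, 29.9])

sr_ms = frame_latencies_ms.mean()                     # mean system responsiveness
sdsr_pct = 100.0 * frame_latencies_ms.std() / sr_ms   # variability relative to the mean (assumed definition)

print(f"SR = {sr_ms:.1f} ms, SDSR = {sdsr_pct:.1f}%")
```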

  46. Influence of System Responsiveness on Object Placement tasks

  47. Influence of Feedback Multi-modality • HF studies were done at the University of Birmingham in the late ’90s to determine the influence of force feedback mode on task completion time in VR; • The task was the manipulation of disks to construct the “Tower of Hanoi”; • Four conditions – non-immersive VR with a 2-D mouse, immersive (HMD) with a 3-D mouse, immersive with instrumented objects, and real objects; • “Instrumented objects” (disks with a tracker attached) were used to provide force feedback – augmented VR; • Subjects were four males, each with six months of experience in VR; • Each subject performed 10 trials for each condition; the conditions were randomized.

  48. Influence of Feedback Multi-modality Tower of Hanoi task [Figure: move sequence, steps 1–7] Problem – stack the three rings on another pole; a larger ring is never placed on top of a smaller one.
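
For reference, the minimal solution of the three-ring Tower of Hanoi takes 2^3 - 1 = 7 moves; the short sketch below prints that move sequence (the pole names are arbitrary).

```python
# Sketch: minimal move sequence for the classical Tower of Hanoi with three disks.
def hanoi(n, source, target, spare):
    """Print the moves that transfer n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)
    print(f"move disk {n} from {source} to {target}")
    hanoi(n - 1, spare, target, source)

hanoi(3, "pole 1", "pole 3", "pole 2")  # prints 2**3 - 1 = 7 moves
```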

  49. Influence of Feedback Multi-modality 3-D manipulation task – Tower of Hanoi (Boud et al., 2000) [Photos: experimental setup (IO condition) and the virtual scene during the experiments].

  50. Influence of Feedback Multi-modality [Plot: task completion time (sec) vs. experimental condition] Tower of Hanoi performance (Boud et al., 2000).
