HCI 510 : HCI Methods I


Presentation Transcript


  1. HCI 510 : HCI Methods I HCI Methods Controlled Experiments

  2. HCI Methods Controlled Experiments Introduction Participants Ethical Concerns Design (Hypothesis) Design (Variables) Design (Confounding) Design (Within / Between)

  3. HCI Methods Controlled Experiments Introduction

  4. Controlled Experiments Let’s start by watching an experiment …

  5. Controlled Experiments Controlled experiments, an approach that has been adopted from research methods in Psychology, feature large in the arsenal of HCI research methods. Controlled experiments are a widely used approach to evaluating interfaces and styles of interaction, and to understanding cognition in the context of interactions with systems. BLANDFORD, A., COX, A. L. & CAIRNS, P. A. (2008) Controlled Experiments. In Cairns, P.A., & Cox, A.L. (eds.) Research Methods for Human Computer Interaction. CUP. 1-16.

  6. Controlled Experiments The question they most commonly answer can be framed as: does making a change to the value of variable X have a significant effect on the value of variable Y? For example X might be an interface or interaction feature, and Y might be time to complete task, number of errors or users’ subjective satisfaction from working with the interface. BLANDFORD, A., COX, A. L. & CAIRNS, P. A. (2008) Controlled Experiments. In Cairns, P.A., & Cox, A.L. (eds.) Research Methods for Human Computer Interaction. CUP. 1-16.
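
As a rough sketch of that question in code (not part of the original slides): suppose variable X is which of two interfaces a participant used and variable Y is time to complete the task. The numbers below are invented for illustration, and scipy's independent-samples t-test is one common way to ask whether the difference in Y is statistically significant.

```python
# Sketch: does changing X (interface A vs B) affect Y (task completion time)?
# Times are invented example data, in seconds.
from scipy import stats

times_interface_a = [48.2, 51.7, 45.9, 53.1, 49.8, 47.5, 50.3, 52.0]
times_interface_b = [41.0, 44.6, 39.8, 43.2, 42.5, 40.1, 45.0, 41.9]

t_stat, p_value = stats.ttest_ind(times_interface_a, times_interface_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (conventionally < 0.05) is taken as evidence that the
# change to X had a significant effect on Y.
```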

  7. Controlled Experiments Controlled experiments are more widely used in HCI research than in practice, where the costs of designing and running a rigorous experiment typically outweigh the benefits. The purpose of this lecture is to outline matters that need to be considered when designing experiments to answer questions in HCI. BLANDFORD, A., COX, A. L. & CAIRNS, P. A. (2008) Controlled Experiments. In Cairns, P.A., & Cox, A.L. (eds.) Research Methods for Human Computer Interaction. CUP. 1-16.

  9. Controlled Experiments • Quantitative Evaluation of Systems • Quantitative: precise measurement, numerical values; bounds on how correct our statements are • Methods: user performance data collection; controlled experiments

  10. Collecting user performance data Data collected on system use (often lots of data) • Controlled Experiments

  11. Collecting user performance data Data collected on system use (often lots of data) Exploratory: hope something interesting shows up but difficult to analyze • Controlled Experiments

  12. Collecting user performance data Data collected on system use (often lots of data) Exploratory: hope something interesting shows up, but difficult to analyze Targeted: look for specific information, but may miss something • frequency of requests for on-line assistance – what did people ask for help with? • frequency of use of different parts of the system – why are parts of the system unused? • number of errors and where they occurred – why does an error occur repeatedly? • time it takes to complete some operation – what tasks take longer than expected? • Controlled Experiments
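
A small sketch of the "targeted" style of analysis (the log format and event names here are assumptions for illustration, not from the lecture): count help requests and errors, and compute per-task completion times from a hypothetical interaction log.

```python
# Sketch: targeted analysis of a hypothetical interaction log.
from collections import Counter

# Each event: (timestamp in seconds, event name) -- invented data.
log = [
    (0.0, "task_start"), (4.2, "help_request"), (9.8, "error"),
    (15.3, "task_end"), (16.0, "task_start"), (27.9, "task_end"),
]

counts = Counter(name for _, name in log)
print("help requests:", counts["help_request"])
print("errors:", counts["error"])

# Pair each task_start with the matching task_end to get completion times.
starts = [t for t, name in log if name == "task_start"]
ends = [t for t, name in log if name == "task_end"]
print("task durations (s):", [end - start for start, end in zip(starts, ends)])
```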

  13. Controlled experiments Traditional scientific method Reductionist: clear, convincing results on specific issues In HCI: insights into cognitive processes, human performance limitations ... allows system comparison, fine-tuning of details ... • Controlled Experiments

  14. Controlled experiments Strives for: • lucid and testable hypothesis • quantitative measurement • measure of confidence in results obtained (statistics) • replicability of experiment • control of variables and conditions • removal of experimenter bias • Controlled Experiments

  15. HCI Methods Controlled Experiments Participants

  16. Participants For any experiment, it is necessary to consider what the appropriate user population is. For example, if the experiment is designed to test the effect of a changed display structure for a specialist task, such as a new air traffic control system, it is important to recruit participants who are familiar with that task, namely experienced air traffic controllers. • Controlled Experiments

  17. Participants Similarly, if the concern is with an interface for older users, it is important to recruit such users to the study. Ideally, for any experiment, a representative sample of the user population is recruited as participants; pragmatically, this is not always feasible (also, it is so much easier to recruit friends, students or members of a psychology department participant panel). If a non-representative sample of users is involved in the study, then the consequences of this for the findings should be carefully considered. For example, how meaningful is it to have run an experiment on an interface intended for air traffic controllers with undergraduate psychology students? Probably not at all. • Controlled Experiments

  18. Participants Having decided on the user population, decisions need to be made on how many participants to recruit, depending on factors such as the power of the statistical tests to be used, the time available for the study, the ease of recruiting participants, funds or other incentives available as participant rewards and so on. Participants can then be recruited through direct approach or by advertising in suitable places. • Controlled Experiments
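
As a sketch of how "the power of the statistical tests" feeds into the number of participants: statsmodels can solve for the sample size of an independent-samples t-test given an assumed effect size, significance level and desired power. The specific values below are conventional placeholders, not figures from the lecture.

```python
# Sketch: participants per group for a between-subjects t-test,
# given assumed (placeholder) effect size, alpha and power.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,   # assumed medium effect (Cohen's d)
    alpha=0.05,        # significance level
    power=0.8,         # probability of detecting the effect if it exists
)
print(f"about {n_per_group:.0f} participants per group")  # roughly 64
```

A larger assumed effect size or a lower required power brings this number down, which is why recruiting effort and the planned statistics have to be considered together.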

  19. Participants - Summary Ways of controlling subject variability: • reasonable number of subjects • random assignment • make different user groups an independent variable • screen for anomalies in the subject group (superstars versus poor performers, novice versus expert) • Controlled Experiments
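
One of those controls, random assignment, is easy to do by script rather than by hand; a minimal sketch with invented participant IDs and a fixed seed so the assignment itself can be replicated:

```python
# Sketch: reproducible random assignment of participants to two conditions.
import random

participants = [f"P{i:02d}" for i in range(1, 13)]     # invented IDs
conditions = ["condition A", "condition B"]

random.seed(510)              # fixed seed -> the assignment can be replicated
random.shuffle(participants)

# Alternate shuffled participants between conditions (equal group sizes).
assignment = {p: conditions[i % 2] for i, p in enumerate(participants)}
for participant, condition in sorted(assignment.items()):
    print(participant, "->", condition)
```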

  20. Controlled Experiments Participants and Conformity…

  21. HCI Methods Controlled Experiments Ethical Concerns

  22. Ethical Concerns Although not usually reported explicitly, one important consideration is the ethical dimensions of any study. Most professional bodies (e.g. BPS, 2006) publish codes of practice. Less formally, Blandford et al (forthcoming) have proposed that the three important elements of ethical consideration can be summarised by the mnemonic ‘VIP’: • Vulnerable participants • Informed consent • Privacy, confidentiality and maintaining trust • Controlled Experiments

  23. Ethical Concerns – Vulnerable Participants Examples of vulnerable participants include obviously vulnerable groups (such as the young, old or infirm), but may also include less obvious people, such as those with whom the investigator has a power relationship (e.g. students who may feel obligated to participate in a study for their professor), those who otherwise feel unable to refuse to participate for any reason, or those who might feel upset or threatened by some aspect of the study. Some concerns can be addressed simply by making it very clear to participants that it is the system that is being assessed and not them. • Controlled Experiments

  24. Ethical Concerns – Informed Consent It is now recognised as good practice to ensure all participants in any study are informed of the purpose of the study and of what will be done with the data. In particular, the data should normally be made as anonymous as possible (e.g. by using codes in place of names), and individuals’ privacy and confidentiality need to be respected. It is now common practice to provide a (short) written information sheet about the experiment, and to have a consent form on which participants can indicate that they understand what is expected of them, that they are participating voluntarily, and that they are free to withdraw at any time without giving a reason. This is informed consent – a person agrees to take part knowing what they are getting into. • Controlled Experiments
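
A trivial sketch of the "codes in place of names" point (names and values invented): the name-to-code key is kept separately and securely, and only the coded records go into the analysis.

```python
# Sketch: pseudonymise participant records before analysis.
# The name-to-code key would be stored separately and securely.
names = ["Alice Example", "Bob Example", "Carol Example"]          # invented
code_key = {name: f"P{i + 1:03d}" for i, name in enumerate(names)}

raw_records = [
    {"name": "Alice Example", "task_time_s": 48.2, "errors": 1},
    {"name": "Bob Example", "task_time_s": 44.6, "errors": 0},
]

anonymised = [
    {"participant": code_key[r["name"]],
     "task_time_s": r["task_time_s"],
     "errors": r["errors"]}
    for r in raw_records
]
print(anonymised)   # no names appear in the analysis data
```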

  25. Ethical Concerns – Privacy, Confidentiality and Trust Usually, it is possible to offer participants the opportunity to talk about the experiment in a debriefing session after they have finished the tasks they were set. Not only does this help to make the participants feel valued, but sometimes it can be a source of informal feedback that can lead to a better design of experiment or even new ideas for experiments. All data should be stored in accordance with legislation; for example, in the UK, the Data Protection Act specifies what information can be held and for what reasons, and it is necessary to register with the government if data is being stored on individuals that allows them to be identified. • Controlled Experiments

  26. Controlled Experiments

  27. Controlled Experiments Let’s watch something on ethics …

  28. HCI Methods Controlled Experiments Design (Hypothesis)

  29. Design - Hypothesis A controlled experiment tests a hypothesis – typically about the effects of a designed change on some measurable performance indicator. For example, a hypothesis could be that a particular combination of speech and keypress input will greatly enhance the speed and accuracy of people sending text messages on their mobile. • Controlled Experiments

  30. Design - Hypothesis A controlled experiment tests a hypothesis – typically about the effects of a designed change on some measurable performance indicator. For example, a hypothesis could be that a particular combination of speech and keypress input will greatly enhance the speed and accuracy of people sending text messages on their mobile. The aim of a classical experiment is, formally, to fail to prove the null hypothesis. That is, for the texting example, you should design an experiment which in all fairness ought not to make any difference to the speed and accuracy of texting. The assumption that there will be no difference between designs is the null hypothesis. By failing to show this, you provide evidence that the design is actually having an effect in the way that you predicted it would. • Controlled Experiments

  31. Design - Hypothesis Put more generally: the study is designed to show that the intervention has no effect, within the bounds of probability. It is by failing to prove that the intervention has had no effect – that is, by showing that the probability of getting this result if the intervention has no effect is very small indeed – that one is led to the conclusion that the intervention did indeed have an effect. More formally, the failure to prove the null hypothesis provides evidence that there is a causal relationship between the independent and dependent variables. • Controlled Experiments
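
That logic can be made concrete with a small permutation test (a sketch, reusing the invented task times from earlier): assume the null hypothesis is true, shuffle the condition labels many times, and see how often a difference at least as large as the observed one appears by chance.

```python
# Sketch: the null-hypothesis logic as a permutation test on invented data.
import random

group_a = [48.2, 51.7, 45.9, 53.1, 49.8, 47.5, 50.3, 52.0]
group_b = [41.0, 44.6, 39.8, 43.2, 42.5, 40.1, 45.0, 41.9]
observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))

pooled = group_a + group_b
random.seed(0)
n_shuffles, extreme = 10_000, 0
for _ in range(n_shuffles):
    random.shuffle(pooled)                  # pretend the labels don't matter
    a, b = pooled[:len(group_a)], pooled[len(group_a):]
    if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
        extreme += 1

print(f"p = {extreme / n_shuffles:.4f}")
# A very small p says: if the intervention truly had no effect, a difference
# this large would almost never happen -- so we conclude it did have an effect.
```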

  32. Controlled Experiments So let’s redefine the hypothesis (first 2 mins) …

  33. HCI Methods Controlled Experiments Design (Variables)

  34. HCI Methods Controlled Experiments Design (Variables) Independent? Dependent?

  35. Design - Variables In an HCI context, the changes to be made might be to interaction design, interface features, participant knowledge, and so on. The variable that is intentionally varied is referred to as the independent variable and that which is measured is the dependent variable. One way to try to remember which way round these are is to think that the value of the dependent variable depends on the value of the independent variable. There may be multiple dependent variables (e.g. time to complete task, error rate) within one experiment, but – at least for simple experiments – there should normally only be one independent variable. • Controlled Experiments

  36. Design - Variables The hypothesis includes the independent variables that are to be altered: • the things you manipulate independent of a subject’s behaviour • determines a modification to the conditions the subjects undergo • may arise from subjects being classified into different groups • Controlled Experiments

  37. Design - Variables The hypothesis also includes the dependent variables that will be measured: • variables dependent on the subject’s behaviour / reaction to the independent variable • the specific things you set out to quantitatively measure / observe • Controlled Experiments
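
A minimal sketch of how the distinction shows up in recorded data (field names are illustrative, not from the slides): each trial stores the level of the independent variable the experimenter set, alongside the dependent variables measured from the participant's behaviour.

```python
# Sketch: one record per trial -- the independent variable is set by the
# experimenter, the dependent variables are measured from the participant.
from dataclasses import dataclass

@dataclass
class Trial:
    participant: str
    menu_type: str            # independent variable: "pop-up" or "pull-down"
    time_to_select_s: float   # dependent variable
    errors: int               # dependent variable

trials = [
    Trial("P001", "pop-up", 1.8, 0),
    Trial("P001", "pull-down", 2.4, 1),
]
print(trials)
```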

  38. Design - Variables For instance: if you were measuring the growth rate of plants under full sunlight for 8 hours a day versus plants that only have 4 hours of full sunlight per day, the amount of time per day of full sunlight would be the independent variable - the variable that you control. The growth rate of the plants would be a dependent variable. • Controlled Experiments

  39. Design - Variables For instance: if you were measuring the growth rate of plants under full sunlight for 8 hours a day versus plants that only have 4 hours of full sunlight per day, the amount of time per day of full sunlight would be the independent variable - the variable that you control. The growth rate of the plants would be a dependent variable. A dependent variable? Yes, there can be more than one dependent variable. In our example, for instance, the growth rate of the plants might be one dependent variable and the overall height of the plants might be another dependent variable. Both of these variables depend upon the independent variable. • Controlled Experiments

  40. Design - Example State a lucid, testable hypothesis - this is a precise problem statement Example 1: There is no difference in the number of cavities in children and teenagers using Crest and No-teeth toothpaste when brushing daily over a one-month period • Controlled Experiments

  41. Design - Example State a lucid, testable hypothesis - this is a precise problem statement Example 1: There is no difference in the number of cavities in children and teenagers using Crest and No-teeth toothpaste when brushing daily over a one-month period Independent Variables? • Controlled Experiments

  42. Design - Example State a lucid, testable hypothesis - this is a precise problem statement Example 1: There is no difference in the number of cavities in children and teenagers using Crest and No-teeth toothpaste when brushing daily over a one-month period Independent Variables? • toothpaste type: Crest or No-teeth toothpaste • age: <= 11 years or > 11 years • Controlled Experiments

  43. Design - Example State a lucid, testable hypothesis - this is a precise problem statement Example 1: There is no difference in the number of cavities in children and teenagers using Crest and No-teeth toothpaste when brushing daily over a one-month period Dependent Variables? • Controlled Experiments

  44. Design - Example State a lucid, testable hypothesis - this is a precise problem statement Example 1: There is no difference in the number of cavities in children and teenagers using Crest and No-teeth toothpaste when brushing daily over a one-month period Dependent Variables? • number of cavities • frequency of brushing • preference • Controlled Experiments

  45. Design - Example State a lucid, testable hypothesis - this is a precise problem statement Example 2: There is no difference in user performance (time and error rate) when selecting a single item from a pop-up or a pull-down menu of 4 items, regardless of the subject’s previous expertise in using a mouse or using the different menu types • Controlled Experiments

  46. Design - Example State a lucid, testable hypothesis - this is a precise problem statement Example 2: There is no difference in user performance (time and error rate) when selecting a single item from a pop-up or a pull-down menu of 4 items, regardless of the subject’s previous expertise in using a mouse or using the different menu types Independent Variables? • Controlled Experiments

  47. Design - Example State a lucid, testable hypothesis - this is a precise problem statement Example 2: There is no difference in user performance (time and error rate) when selecting a single item from a pop-up or a pull-down menu of 4 items, regardless of the subject’s previous expertise in using a mouse or using the different menu types Independent Variables? • menu type: pop-up or pull-down • menu length: 3, 6, 9, 12, 15 • subject type: expert or novice • Controlled Experiments

  48. Design - Example State a lucid, testable hypothesis - this is a precise problem statement Example 2: There is no difference in user performance (time and error rate) when selecting a single item from a pop-up or a pull-down menu of 4 items, regardless of the subject’s previous expertise in using a mouse or using the different menu types Dependent Variables? • Controlled Experiments

  49. Design - Example State a lucid, testable hypothesis - this is a precise problem statement Example 2: There is no difference in user performance (time and error rate) when selecting a single item from a pop-up or a pull-down menu of 4 items, regardless of the subject’s previous expertise in using a mouse or using the different menu types Dependent Variables? • time to select an item • selection errors made • time to learn to use it to proficiency • Controlled Experiments
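
As a sketch of how the independent variables in Example 2 cross into experimental conditions (level lists taken from the slide above, everything else assumed): a full factorial design enumerates every combination of menu type, menu length and subject type.

```python
# Sketch: enumerate the factorial conditions implied by Example 2.
from itertools import product

menu_types = ["pop-up", "pull-down"]
menu_lengths = [3, 6, 9, 12, 15]
subject_types = ["novice", "expert"]

conditions = list(product(menu_types, menu_lengths, subject_types))
print(len(conditions), "conditions")      # 2 * 5 * 2 = 20
for menu_type, length, subject in conditions[:4]:
    print(menu_type, length, subject)
```

Each dependent variable (time to select, selection errors, time to learn) would then be measured in every one of these conditions.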

  50. Controlled Experiments Design (Independent and Dependent Variables) …
