IS 4800 Empirical Research Methods for Information Science Class Notes Feb. 29, 2012

Presentation Transcript


  1. IS 4800 Empirical Research Methods for Information Science Class Notes Feb. 29, 2012 Instructor: Prof. Carole Hafner, 446 WVH hafner@ccs.neu.edu Tel: 617-373-5116 Course Web site: www.ccs.neu.edu/course/is4800sp12/

  2. Oral Presentation of Study Results

  3. Presenting your research • A research project should “tell a story” • A brief presentation (20 min or less) is more difficult than a longer one – selectivity is critical • The T-shaped talk and the U-shaped talk • Rehearse for timing

  4. Oral Presentation • Main concepts and ideas: do not go into great detail on experimental methods, but give enough so people understand what you did • Focus on motivation, results, implications • If a listener wants details, they can read the paper or ask questions

  5. Oral Presentation Don’t do this…

  6. Oral Presentation • Do use figures a lot [Figure: example charts showing GOAL, TASK, BOND, and COMPOSITE measures at Week 1 and Week 4]

  7. Oral Presentation: Guide for Visuals • Visuals should be exhibits that you talk about • Do not put lots of text on charts • Do not read your charts aloud during your presentation • Use interactivity, video, and images to keep your audience awake

  8. Common Questions • How did you evaluate that? • How did you measure that? • How did you control for extraneous variable X? • Why didn't you use statistic Y? • Isn't that a biased sample? • What was your control group? • How did you do study procedure Z?

  9. Tips • Describe your sample: minimal demographics are the number of subjects broken down by gender; better still, age, occupation, major, and year • Minimize text on your charts • If you use a novel measure (e.g., a new survey) you must give details on the measure: examples of the most important actual questions asked, and any reliability/validity/psychometric work done • If you do interviews, include actual quotes • Build from data to conclusions • Practice your timing/delivery with your project team

  10. Written Study Reports: Objectives (also used for critiques) • Describe what your study is about; it should also tell a story, and maybe a more complex one • Motivate your study • Assure the reader that you have conducted a sound study • Research methods (often presented in small font) • Present results in an objective manner • Discuss implications • Discuss future work • Enable replication

  11. Typical Academic IS/CS/HCI Paper Structure • Abstract • Introduction • Motivation • Related work • Hypotheses • Method • Results • Discussion • Limitations • Implications • Future work • References

  12. Typical Design/Development Study • Abstract (?) • Executive Summary and Recommendations • Introduction • Motivation • Related work • System design • Evaluation • Hypotheses • Method • Results • Discussion (summary, limitations) • Conclusion • Implications • Future work • References

  13. The Abstract • A concise summary (one paragraph!) • The abstract for an empirical study should include: information on the problem under study; the nature of the subject sample; a description of methods, equipment, and procedures; a statement of the results; a statement of the findings or conclusions drawn • Often the last thing you write

  14. The Introduction • The part of the paper giving the justification for the study • Usually has the following information: an introduction to the topic under study; a brief review of research and theory related to the topic; a statement of the problem to be addressed; a statement of the purpose of the research; a brief description of the research strategy; a description of predictions and hypotheses • CS/IS papers often put Related Work as a separate section after the Introduction; for each related work, describe how your own work differs

  15. Organization of the Introduction: General to Specific

  16. The Method Section • Includes information on exactly how a study was carried out • Subsections: • Participants or Subjects: describe the participant or subject sample in detail; human participants go in a Participants subsection, animal subjects in a Subjects subsection • Apparatus or Materials: describe in detail any equipment or materials used; equipment is usually described in an Apparatus subsection and written materials in a Materials subsection

  17. The Method Section (cont.) • Procedure: describe exactly how the study was carried out; the conditions to which subjects were exposed or under which they were observed; the behaviors measured and how they were scored; when and where observations were made; and debriefing procedures • Enough detail should be included in all sections that the study could be replicated

  18. The Results Section • Objective, dry, boring: just the facts • All relevant analyses are reported in the results section • Do not present raw data; data should be reported in summary form, as descriptive statistics and inferential statistics • Results of descriptive and inferential statistics must be presented in narrative format (see the sketch below) • Describe the source of any unconventional statistical tests
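
To make the "summary form" point concrete, here is a minimal sketch, assuming Python with SciPy; the study, variables, and all data values are invented for illustration. It computes descriptive statistics and an independent-samples t-test, then reports them in a single narrative sentence rather than as raw data:

```python
# Hypothetical example: summarizing a two-group comparison in narrative form.
# All data values are invented for illustration.
from statistics import mean, stdev
from scipy import stats

group_a = [12.1, 14.3, 11.8, 15.0, 13.2, 12.9]  # task times (min), condition A
group_b = [16.4, 15.9, 17.2, 14.8, 16.1, 15.5]  # task times (min), condition B

t, p = stats.ttest_ind(group_a, group_b)  # inferential statistic
df = len(group_a) + len(group_b) - 2      # degrees of freedom

# Narrative format, as the slide recommends: summary statistics, not raw data.
print(f"Condition A (M = {mean(group_a):.1f}, SD = {stdev(group_a):.1f}) was faster "
      f"than condition B (M = {mean(group_b):.1f}, SD = {stdev(group_b):.1f}), "
      f"t({df}) = {t:.2f}, p = {p:.3f}.")
```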

  19. Commonly Used Statistical Citations

  20. Abbreviations for Statistical Notation

  21. The Discussion Section • This is where you can take some liberties in describing what the results mean • Results are interpreted, conclusions are drawn, and findings are related to previous research • The section begins with a brief restatement of the hypotheses • Next, indicate whether the hypotheses were confirmed • The rest of the section is dedicated to integrating the findings with previous research • It is fine to speculate, but speculations should not stray far from the data

  22. Organization of Discussion: Specific to General

  23. Example

  24. Citations • Liberally cite previous and related work • If you copy passages you must cite them and, depending on length, format the text to indicate it is copied • Suggested tools: EndNote, BibTeX, or similar (a sample entry follows)
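
Since the slide names BibTeX as one option, here is a minimal, entirely hypothetical .bib entry (all fields invented); in a LaTeX document it would then be cited with \cite{doe2011usability}:

```bibtex
% Hypothetical entry for illustration only; all fields are invented.
@inproceedings{doe2011usability,
  author    = {Doe, Jane and Smith, John},
  title     = {A Usability Study of an Example System},
  booktitle = {Proceedings of an Example HCI Conference},
  year      = {2011},
  pages     = {1--10}
}
```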

  25. Ethical Issues • Report all of your findings (not just the ones you like) • Adhere to your original plan (power analysis, statistics, measures); report any deviations and why • Do not drop subjects or data points without rigorous justification • If your hypothesis test was not significant, you cannot say anything about a difference in means • If you did not do an experiment that attempts to control for extraneous variables, you cannot mention or imply causality

  26. Introduction to Usability Testing • I. Summative evaluation: Measure/compare user performance and satisfaction • Quantitative measures • Statistical methods • II. Formative Evaluation: Identify Usability Problems • Quantitative and Qualitative measures • Ethnographic methods such as interviews, focus groups

  27. Usability Goals (Nielsen) • Learnability • Efficiency • Memorability • Error avoidance/recovery • User satisfaction • Operationalize these goals to evaluate usability (one hypothetical mapping is sketched below)
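
One hypothetical way to operationalize the goals; the specific metrics below are illustrative choices, not prescribed by the slides:

```python
# Illustrative operationalizations of Nielsen's usability goals.
# Each metric is one plausible choice among many, invented for this sketch.
usability_metrics = {
    "learnability":   "time for a novice to complete a standard task set",
    "efficiency":     "time for an experienced user on the same task set",
    "memorability":   "task time for a casual user returning after weeks away",
    "error recovery": "errors per task, classified as minor vs. serious",
    "satisfaction":   "mean score on a post-test Likert questionnaire",
}

for goal, metric in usability_metrics.items():
    print(f"{goal}: {metric}")
```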

  28. What is a Usability Experiment? • Usability testing in a controlled environment • There is a test set of users • They perform pre-specified tasks • Data are collected (quantitative and qualitative) • Take the mean and/or median value of measured attributes • Compare to a goal or to another system (a minimal sketch follows) • Contrasted with "expert review" and "field study" evaluation methodologies • Note the growth of usability groups and usability laboratories
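
A minimal sketch of the mean/median-versus-goal step, assuming task completion times in seconds from a test set of users; the data and the goal are invented:

```python
# Compare central tendency of measured task times against a usability goal.
from statistics import mean, median

completion_times = [48.2, 55.1, 61.7, 44.9, 52.3, 70.4, 49.8]  # invented data
GOAL_SECONDS = 60.0  # hypothetical pre-specified usability goal

print(f"mean = {mean(completion_times):.1f} s, "
      f"median = {median(completion_times):.1f} s")

if median(completion_times) <= GOAL_SECONDS:
    print("Goal met: the typical user finishes within the target time.")
else:
    print("Goal not met: investigate, redesign, or revisit the goal.")
```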

  29. Experimental factors • Subjects: representative; sufficient sample • Variables: • Independent variable (IV): the characteristic changed to produce different conditions, e.g., interface style, number of menu items • Dependent variable (DV): the characteristics measured in the experiment, e.g., time taken, number of errors

  30. Experimental factors (cont.) • Hypothesis: a prediction of the outcome framed in terms of the IV and DV • Null hypothesis: states no difference between conditions; the aim is to disprove this • Experimental design: • Within-groups design: each subject performs the experiment under each condition; transfer of learning is possible; less costly and less likely to suffer from user variation • Between-groups design: each subject performs under only one condition; no transfer of learning; more users required; variation can bias results • (The analyses the two designs lead to are sketched below.)
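
A sketch of the analyses the two designs lead to, assuming Python with SciPy; the IV is interface style, the DV is time taken in seconds, and all numbers are invented:

```python
from scipy import stats

# Between-groups: different subjects per condition -> independent-samples t-test.
menu_group    = [41.2, 39.8, 45.1, 43.7, 40.9]   # subjects who used menus
command_group = [36.5, 38.1, 35.2, 37.9, 34.8]   # different subjects, commands
t_b, p_b = stats.ttest_ind(menu_group, command_group)

# Within-groups: the same subjects under both conditions -> paired t-test,
# which removes between-subject variation (hence less user-variation noise,
# but watch for transfer of learning between conditions).
menu_times    = [41.2, 39.8, 45.1, 43.7, 40.9]   # same five subjects...
command_times = [38.9, 36.2, 42.0, 41.1, 37.5]   # ...measured again

t_w, p_w = stats.ttest_rel(menu_times, command_times)

# A small p-value rejects the null hypothesis of no difference between conditions.
print(f"between-groups: t = {t_b:.2f}, p = {p_b:.3f}")
print(f"within-groups:  t = {t_w:.2f}, p = {p_w:.3f}")
```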

  31. Summative Analysis: What to Measure? (and its relationship to a usability goal) • Total task time • User "think time" (dead time?) • Time spent not moving toward the goal • Ratio of successful actions to errors • Commands used/not used • Frequency of user expressions of confusion, frustration, satisfaction • Frequency of reference to manuals/help system • Percent of the time such reference provided the needed answer • (Two of these measures are derived from an event log in the sketch below.)
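
Two of these measures derived from a logged session; the event-log format is invented for illustration:

```python
# Hypothetical session log: (event kind, detail) pairs recorded during a task.
events = [
    ("command", "open"), ("command", "search"), ("error", "bad query"),
    ("command", "search"), ("help", "query syntax"), ("command", "save"),
]

successes = sum(1 for kind, _ in events if kind == "command")
errors    = sum(1 for kind, _ in events if kind == "error")
help_refs = sum(1 for kind, _ in events if kind == "help")

# Ratio of successful actions to errors, and frequency of help references.
print(f"successful actions : errors = {successes} : {errors}")
print(f"references to manuals/help system: {help_refs}")
```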

  32. Measuring User Performance • Measuring learnability: time to complete a set of tasks; note the learnability/efficiency trade-off • Measuring efficiency: time to complete a set of tasks; how to define and locate "experienced" users? • Measuring memorability: the most difficult, since "casual" users are hard to find for experiments; memory quizzes may be misleading

  33. Measuring User Performance (cont.) • Measuring user satisfaction: Likert scale (agree or disagree); semantic differential scale; physiological measures of stress • Measuring errors: classification of minor vs. serious • (A Likert scoring sketch follows.)
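
A minimal scoring sketch for 5-point Likert satisfaction items (1 = strongly disagree ... 5 = strongly agree); the items and responses are invented:

```python
# Summarize per-item satisfaction scores from a post-test questionnaire.
from statistics import mean, stdev

responses = {
    "The system was easy to learn":  [4, 5, 3, 4, 4, 5, 2],
    "I would use this system again": [3, 4, 4, 2, 5, 4, 3],
}

for item, scores in responses.items():
    print(f"{item}: M = {mean(scores):.2f}, "
          f"SD = {stdev(scores):.2f} (n = {len(scores)})")
```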

  34. Reliability and Validity • Reliability means repeatability; statistical significance is a measure of reliability • Validity means the results will transfer to a real-life situation; it depends on matching the users, task, and environment • Reliability is difficult to achieve because of the high variability in individual user performance

  35. Formative Evaluation: What Is a Usability Problem? • Unclear: the planned method for using the system is not readily understood or remembered (information design level) • Error-prone: the design leads users to stray from the correct operation of the system (any design level) • Mechanism overhead: the mechanism design creates awkward workflow patterns that slow down or distract users • Environment clash: the design of the system does not fit well with the users' overall work processes (any design level); e.g., an incomplete transaction cannot be saved

  36. Qualitative Methods for Collecting Usability Problems • Thinking-aloud studies: difficult to conduct; experimenter prompting must be non-directive; alternatives include constructive interaction, the coaching method, and retrospective testing; output is notes on what users did and expressed (goals, confusions or misunderstandings, errors, reactions) • Questionnaires: should themselves be usability-tested beforehand • Focus groups, interviews

  37. Observational Methods: Think-Aloud • The user is observed performing a task • The user is asked to describe what he or she is doing and why, what he or she thinks is happening, etc. • Advantages: simplicity (requires little expertise); can provide useful insight; can show how the system is actually used • Disadvantages: subjective; selective; the act of describing may alter task performance

  38. Observational Methods: Cooperative Evaluation • A variation on think-aloud: the user collaborates in the evaluation, and both user and evaluator can ask each other questions throughout • Additional advantages: less constrained and easier to use; the user is encouraged to criticize the system; clarification is possible
