
Applying Science Towards Understanding Behavior in Organizations


Presentation Transcript


  1. Applying Science Towards Understanding Behavior in Organizations Chapters 2 & 3

  2. Research Issues in Organizations • Approaches to collecting data • Experimental • Observational/correlational • Data collection issues • Sampling • How should we select participants? • What impact does it have on the results? • Experimental design • Controlling potential confounds • Assigning participants to experimental conditions • Measurement issues • Describing and interpreting the results

  3. Experiments: A Review • Do changes in one variable (X) “cause” changes in another variable (Y)? • Independent Variable (X): condition or event that is manipulated by the experimenter • Dependent Variable (Y): variable that is affected (hopefully) by manipulating the independent variable • Extraneous Variable(s): any variable other than the independent variable that may influence the dependent variable

  4. Experiments: Pros and Cons • Advantage: allows conclusions about the direct effects of one variable on another • Disadvantages: • Experimental conditions are artificial, so results may not “generalize” to the real world • Some questions can’t be tested in an experiment • Experiments require control that is not always available in the “real” world

  5. Experimental Design • Controlling potential confounds • The goal of an experiment is to “rule out” alternative explanations of what affected the dependent variable • Confounds are threats to internal validity • They can be controlled through appropriate experimental design and procedures

  6. Validity • Internal validity: Is there a reason, other than the independent variable, that could explain the results of the experiment? • Threats to internal validity: history, maturation, testing, instrumentation, statistical regression, selection, mortality, selection-maturation, diffusion of treatment • External validity: Do the results of the experiment generalize (apply) to settings other than the experiment? • Threats to external validity: sample, setting (e.g., culture), time (e.g., 60s vs. 90s), replication (lack of)

  7. Sampling • How participants are selected for a study influences the extent to which the results can be applied to a larger group (external validity) • A wide variety of techniques are available • Two main types of sampling • Probability: a predetermined chance of any individual in the population being selected for the study • Nonprobability: typically nonrandom sampling

  8. Sampling Techniques • Probability sampling: simple random sampling, systematic sampling, stratified random sampling, cluster sampling, multistage sampling • Nonprobability sampling: convenience sampling, quota sampling, snowball sampling
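To make the contrast concrete, here is a minimal Python sketch using an invented 100-person employee roster; the departments, sample size, and proportionate allocation are illustrative assumptions, not part of the slides.

```python
# Minimal sketch (hypothetical data): simple random vs. stratified random sampling.
import random
from collections import defaultdict

employees = [{"id": i, "dept": dept}
             for i, dept in enumerate(["sales"] * 60 + ["engineering"] * 30 + ["hr"] * 10)]

# Simple random sampling: every employee has the same chance of selection.
simple_sample = random.sample(employees, k=20)

# Stratified random sampling: sample within each department in proportion to its size,
# so small strata (e.g., HR) are represented rather than left to chance.
strata = defaultdict(list)
for e in employees:
    strata[e["dept"]].append(e)

stratified_sample = []
for dept, members in strata.items():
    k = round(20 * len(members) / len(employees))
    stratified_sample.extend(random.sample(members, k=k))

print(len(simple_sample), len(stratified_sample))
```

The stratified version guarantees that each department appears roughly in proportion to its size, which is the usual motivation for stratifying rather than relying on chance alone.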

  9. Post with no Control Group: Training → Posttest

  10. Pre – Post with no Control Group: Pretest → Training → Posttest

  11. Control Group with no Pretest • Experimental group: Training → Posttest • Control group: Placebo → Posttest • Compare group differences on the posttest

  12. Pre – Post with Control Group • Experimental group: Pretest → Training → Posttest • Control group: Pretest → Placebo → Posttest • Compare group differences at pretest and group differences at posttest
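As an illustration of how this design is often summarized, the sketch below uses invented pretest and posttest scores and compares the average gain in the training group with the average gain in the control group; the specific comparison of gain scores is an assumption for illustration, not an analysis prescribed by the slides.

```python
# Minimal sketch (hypothetical scores): summarizing a pre-post design with a
# control group by comparing each group's average gain.
from statistics import mean

experimental = {"pretest": [62, 58, 70, 65, 60], "posttest": [75, 72, 84, 80, 74]}  # received training
control      = {"pretest": [61, 59, 68, 66, 63], "posttest": [64, 60, 71, 67, 66]}  # received placebo

def mean_gain(group):
    """Average posttest-minus-pretest change for one group."""
    return mean(post - pre for pre, post in zip(group["pretest"], group["posttest"]))

training_effect = mean_gain(experimental) - mean_gain(control)
print(f"Experimental gain: {mean_gain(experimental):.1f}")
print(f"Control gain:      {mean_gain(control):.1f}")
print(f"Estimated training effect (difference in gains): {training_effect:.1f}")
```

Subtracting the control group's gain helps rule out alternative explanations such as history, maturation, and testing effects, since both groups are exposed to them.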

  13. Measurement • Measurement – the process of assigning numbers to objects or events according to rules (Linn & Gronlund, 1995). • Psychological Measurement – concerned with evaluating individual differences in psychological traits. • Trait – descriptive label applied to a group of behaviors (e.g., friendly; intelligent)

  14. Utilizing Individual Differences • Psychologists assume that most traits are normally distributed in the population • e.g., height, intelligence, KSAs • Psychologists study: • how to measure these differences • how to use these differences to predict performance • I/O psychologists rely primarily on the following as predictors of job performance: • Cognitive abilities • Personality
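A small simulation can make this logic concrete: the sketch below generates normally distributed ability scores, builds a performance variable partly driven by them, and reports the correlation between predictor and criterion. The distribution parameters and effect size are arbitrary assumptions for illustration.

```python
# Minimal sketch (simulated data): a trait assumed to be normally distributed,
# used as a predictor of job performance. Requires Python 3.10+ for statistics.correlation.
import random
from statistics import NormalDist, correlation

random.seed(1)
n = 200
# Simulated cognitive-ability scores, assumed normally distributed in the population.
ability = NormalDist(mu=100, sigma=15).samples(n, seed=1)
# Simulated job performance: partly a function of ability, partly random noise.
performance = [0.04 * a + random.gauss(0, 1) for a in ability]

# Criterion-related validity: correlation between the predictor and performance.
print(f"r = {correlation(ability, performance):.2f}")
```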

  15. Types of Assessment • Biographical Information • Interviews • Work Samples • Letters of Recommendation • Psychological Tests

  16. Biographical Data • Good questions are about events that are: • historical • external • discrete • controllable (by the individual) • verifiable • equal access • job relevant • non-invasive (Mael, 1991) • Rational vs. empirical keying methods

  17. Biographical Data • Strong criterion validity • drug use and criminal history predict dysfunctional police behavior (Sarchione et al., 1998) • not redundant with personality (McManus & Kelly, 1999) • Measurement issues • Generalizability • Faking • Fairness • Privacy concerns

  18. Interviews • Structured vs. unstructured • Information gathering vs. interpersonal behavior sample • Situational interview • “How would you handle a circumstance in which you needed the help of a person you did not like?” • Measurement issues • structured interviews have higher criterion-related validity • value of unstructured interviews? • Illusion of validity • Guidelines for structured interviews

  19. Work Samples • Applicants perform a task under standardized conditions • Historically used for blue-collar jobs • e.g., use of tools, demonstrating driving skills • White-collar examples • speech interview for a foreign worker, test of basic chemistry knowledge • Measurement issues • high criterion validity if the sampled skills are similar to the job • costly to administer • work best with mechanical, rather than people-oriented, tasks

  20. Assessment Centers • Realistic tasks done in groups • Assessed by multiple raters rating multiple domains • Multiple methods • in-basket exercise • leaderless group exercise • Strong criterion validity (e.g., teachers, police) • overall scores predict job performance • Measurement issues • costly to administer • ratings of different dimensions within a task are too highly correlated • dimension ratings are not strongly correlated across tasks • fix? focus on behavior checklists and rater training

  21. Drug Testing • opinion? • People are more accepting of it if the job involves risks to others (Paronto et al., 2002) • Measurement issues • reliability is very high, but not perfect • Validity? • Normand, Salyards, & Mahoney (1990) • over 5,000 postal service applicants • those who tested positive had 59% higher absenteeism and were 47% more likely to be fired • no differences in injuries or accidents

  22. Letters of Recommendation • Ever written a letter of recommendation for someone? • Worst criterion validity of all commonly used assessment tools • Some use for screening out extremely poor candidates • Measurement issues • restriction of range • writer bias/investment

  23. Psychological Test Characteristics • Group vs. individual • Objective vs. open-ended • Paper & pencil vs. performance • Power vs. speed

  24. Psychological Test Types • Ability Tests • Cognitive ability • Psychomotor ability • Knowledge and skill or achievement • Integrity • Personality • Emotional Intelligence • Vocational interest

  25. Integrity Tests • Designed to predict whether an employee will engage in counterproductive work behavior (CWB) • overt vs. personality-based (covert) • Better at predicting general CWB and performance than theft (r = .30 to .40) • Measurement issues • criteria are difficult to measure! • proprietary issues • legal and privacy issues • faking

  26. Personality Tests • Measure predispositions toward particular feelings and behaviors • not all tests are based on past research • many have shown incremental validity • e.g., predict performance even when controlling for IQ • Measurement issues • job relevance • not easily or often faked, and may not be a problem when faked (faking occurs on the job as well)

  27. The Big Five Inventory • Openness • Highs: imaginative, creative, and tend to seek out cultural and educational experiences. • Lows: more down-to-earth, less interested in art, and more practical. • Conscientiousness • Highs: methodical, well organized, and dutiful. • Lows: less careful, less focused, and more likely to be distracted. • Extraversion • Highs: energetic and seek out the company of others. • Lows (introverts): tend to be more quiet and reserved. • Agreeableness • Highs: tend to be trusting, friendly, and cooperative. • Lows: tend to be more aggressive and less cooperative. • Neuroticism • Highs: prone to insecurity and emotional distress. • Lows: more relaxed, less emotional, and less prone to distress.
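For illustration, here is a minimal scoring sketch for a short Big Five style questionnaire with reverse-keyed items; the item identifiers, keys, and responses are invented for the example and are not the actual BFI items or scoring key.

```python
# Minimal sketch (hypothetical items and keys, not the actual BFI):
# scoring a short Big Five style questionnaire with reverse-keyed items.
# Responses are on a 1-5 agreement scale.
SCALE_MAX_PLUS_ONE = 6  # reverse-keying: 1 <-> 5, 2 <-> 4, etc.

# Each trait lists (item_id, reverse_keyed) pairs, invented for illustration.
keys = {
    "extraversion":      [("e1", False), ("e2", True)],
    "conscientiousness": [("c1", False), ("c2", True)],
    "neuroticism":       [("n1", False), ("n2", False)],
}

responses = {"e1": 4, "e2": 2, "c1": 5, "c2": 1, "n1": 2, "n2": 3}

def score(trait):
    """Average the trait's items after flipping reverse-keyed ones."""
    items = keys[trait]
    total = sum((SCALE_MAX_PLUS_ONE - responses[i]) if rev else responses[i]
                for i, rev in items)
    return total / len(items)

for trait in keys:
    print(f"{trait}: {score(trait):.1f}")
```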

  28. Cognitive Tests • Have the greatest validity • Often very easy and inexpensive to use • Wonderlic Personnel Test • 50 items • 12-minute time limit • Sample questions • Interpreting scores? • Scores vary as a function of race and ethnicity • Ethical issues? • Face validity?

  29. Psychological Test Characteristics • Group vs. individual • Objective vs. open-ended • Paper & pencil vs. performance • Power vs. speed

  30. Reliability and Validity • Reliability • Test-retest • Parallel (Alternate) forms • Internal Consistency • Validity • Face • Content • Criterion-related • Construct-related
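To make two of the reliability estimates named above concrete, the sketch below computes a test-retest correlation and Cronbach's alpha (a common internal-consistency index) from invented scores; the data are illustrative assumptions.

```python
# Minimal sketch (hypothetical scores): test-retest reliability and internal
# consistency (Cronbach's alpha). Requires Python 3.10+ for statistics.correlation.
from statistics import correlation, variance

# Test-retest: the same six people take the test on two occasions.
time1 = [22, 30, 25, 28, 19, 27]
time2 = [24, 29, 23, 30, 20, 26]
print(f"Test-retest r = {correlation(time1, time2):.2f}")

# Internal consistency: each row is one person's responses to 4 items.
items = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
]
k = len(items[0])
item_vars = [variance([person[j] for person in items]) for j in range(k)]
total_var = variance([sum(person) for person in items])
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)  # standard alpha formula
print(f"Cronbach's alpha = {alpha:.2f}")
```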
