
Research Methodology


Presentation Transcript


  1. Research Methodology Lecture No. 11 (Goodness of Measures)

  2. Recap • Measurement is the process of assigning numbers or labels to objects, persons, states of nature, or events. • Scales are a set of symbols or numbers, assigned by rule to individuals, their behaviors, or attributes associated with them

  3. Using these scales we complete the development of our instrument. • It remains to be seen whether these instruments accurately measure the concept.

  4. Sources of Measurement Differences Why do ‘scores’ vary? Among the reasons are legitimate differences and differences due to error (systematic or random): 1. There is a true difference in what is being measured. 2. There are differences in stable characteristics of individual respondents. • On satisfaction measures, for example, there are systematic differences in response based on the age of the respondent.

  5. 3. Differences due to short-term personal factors – mood swings, fatigue, time constraints, or other transitory factors. Example – in a telephone survey of the same person, these factors (tired versus refreshed) may cause differences in measurement. 4. Differences due to situational factors – calling when someone may be distracted by something versus giving full attention.

  6. 5. Differences resulting from variations in administering the survey – voice inflection, non-verbal communication, etc. 6. Differences due to the sampling of items included in the questionnaire.

  7. 7. Differences due to a lack of clarity in the measurement instrument (measurement instrument error). Example: unclear or ambiguous questions. 8. Differences due to mechanical or instrument factors – blurred questionnaires, bad phone connections.

  8. Goodness of Measure • Once we have operationalized the concept and assigned scales, we want to make sure that the instruments developed measure the concept accurately and appropriately. • Measure what is supposed to be measured • Measure as well as possible

  9. Validity: checks how well an instrument that has been developed measures the intended concept • Reliability: checks how consistently an instrument measures

  10. Ways to Check for Reliability How do we check the reliability of measurement instruments, that is, the stability of measures and the internal consistency of measures? Two methods are discussed for checking stability. (1) Stability (a) Test–Retest • Use the same instrument, administer the test to the same participants shortly after the first time, taking the measurement under conditions as close to the original as possible.

  11. If there are few differences in scores between the two tests, then the instrument is stable: it has shown test–retest reliability (a simple way to quantify this is sketched below). • Problems with this approach: • Difficult to get cooperation a second time • Respondents may have learned from the first test, so their responses are altered • Other factors may be present that alter results (environment, etc.)
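The slides treat test–retest reliability as a check of how closely two administrations agree. A minimal sketch of one common way to quantify this, a Pearson correlation between the two sets of scores; the data and the r ≥ 0.70 rule of thumb are illustrative assumptions, not prescribed by the lecture:

```python
# Minimal sketch: test-retest stability as the correlation between two
# administrations of the same instrument to the same respondents.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical total scores for ten respondents (time 1 vs. time 2)
time1 = np.array([18, 22, 25, 30, 14, 27, 21, 19, 33, 24])
time2 = np.array([19, 21, 26, 29, 15, 26, 22, 18, 32, 25])

r, p = pearsonr(time1, time2)
print(f"Test-retest correlation: r = {r:.2f} (p = {p:.3f})")
# A high correlation (often r >= 0.70 is used as a rule of thumb)
# suggests the instrument is stable over time.
```

The same correlation idea applies to equivalent form reliability (next slides), except that the two score columns come from two different forms of the instrument rather than two administrations of the same form.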

  12. (b) Equivalent Form Reliability • This approach attempts to overcome some of the problems associated with the test-retest measurement of reliability. • Two questionnaires, designed to measure the same thing, are administered to the same group on two separate occasions (recommended interval is two weeks).

  13. If the scores obtained from the two forms are highly correlated, then the instruments have equivalent form reliability. • It is tough to create two distinct forms that are equivalent. • An impractical method (as with test–retest), not often used in applied research.

  14. (2) Internal Consistency Reliability This is a test of the consistency of respondents’ answers to all the items in a measure. The items should ‘hang together’ as a set; that is, if the items are independent measures of the same concept, they will correlate with one another.
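The slide only says that the items should correlate with one another; a statistic commonly used to summarize this is Cronbach's alpha. A minimal sketch under that assumption, with hypothetical Likert-scale responses:

```python
# Minimal sketch: Cronbach's alpha as an internal-consistency summary
# (an assumption; the slide itself only asks that items correlate).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-item measure answered by six respondents (1-5 scores)
scores = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 2, 3, 3],
    [5, 5, 4, 5, 4],
    [2, 2, 3, 2, 2],
    [4, 4, 4, 3, 4],
    [3, 4, 3, 3, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```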

  15. Developing questions on the concept ‘Enriched Job’

  16. Validity • Definition: whether what was intended to be measured was actually measured.

  17. Face Validity • The weakest form of validity • Researcher simply looks at the measurement instrument and concludes that it will measure what is intended. • Thus it is by definition subjective.

  18. Content Validity • The degree to which the instrument items represent the universe of the concepts under study. • In English: did the measurement instrument cover all aspects of the topic at hand?

  19. Criterion Related Validity • The degree to which the measurement instrument can predict a variable known as the criterion variable.

  20. Two subcategories of criterion-related validity • Predictive Validity • The ability of the test or measure to differentiate among individuals with reference to a future criterion. • E.g. an instrument which is supposed to measure the aptitude of an individual can later be compared with that same individual’s future job performance. Those with good actual performance should also have scored high on the aptitude test, and vice versa.

  21. Concurrent Validity • Is established when the scale discriminates between individuals who are known to be different; that is, they should score differently on the test. • E.g. individuals who are content to receive welfare and individuals who prefer to work should score differently on a scale/instrument that measures work ethic.
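A minimal sketch of the two criterion-related checks just described, using made-up data: predictive validity as a correlation with a future criterion, and concurrent validity as a known-groups comparison. The specific statistics (Pearson correlation, independent-samples t-test) and all values are illustrative assumptions, not prescribed by the lecture:

```python
# Minimal sketch of criterion-related validity checks (illustrative data).
import numpy as np
from scipy.stats import pearsonr, ttest_ind

# Predictive validity: aptitude-test scores vs. later job-performance ratings
aptitude = np.array([55, 62, 70, 48, 81, 66, 59, 74])
performance = np.array([3.1, 3.4, 4.0, 2.8, 4.5, 3.7, 3.2, 4.1])
r, _ = pearsonr(aptitude, performance)
print(f"Predictive validity (aptitude vs. future performance): r = {r:.2f}")

# Concurrent validity: two groups known to differ should score differently
# on the work-ethic scale.
group_working = np.array([28, 31, 27, 30, 29])
group_welfare = np.array([19, 22, 20, 18, 21])
t, p = ttest_ind(group_working, group_welfare)
print(f"Known-groups difference: t = {t:.2f}, p = {p:.3f}")
```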

  22. Construct Validity • Does the measurement conform to underlying theoretical expectations? If so, the measure has construct validity. • i.e. if we are measuring consumer attitudes about product purchases, does the measure adhere to the constructs of consumer behavior theory? • This is the territory of academic researchers.

  23. Two approaches are used to assess construct validity • Convergent Validity • A high degree of correlation between two different measures intended to measure the same construct • Discriminant Validity • A low degree of correlation among variables that are assumed to measure different constructs.

  24. Validity can be checked through correlation analysis, factor analysis, the multitrait–multimethod correlation matrix, etc.
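A minimal sketch of the correlation-analysis idea for convergent and discriminant validity, using simulated scores; the variable names and data are illustrative only, and a full multitrait–multimethod matrix would extend the same idea to several traits and methods:

```python
# Minimal sketch: convergent vs. discriminant validity via correlations
# between simulated measures (illustrative, not from the lecture).
import numpy as np

rng = np.random.default_rng(0)
n = 100
satisfaction_a = rng.normal(size=n)                               # measure A of job satisfaction
satisfaction_b = satisfaction_a + rng.normal(scale=0.4, size=n)   # measure B, same construct
commitment = rng.normal(size=n)                                   # a different construct

conv = np.corrcoef(satisfaction_a, satisfaction_b)[0, 1]
disc = np.corrcoef(satisfaction_a, commitment)[0, 1]
print(f"Convergent (A vs. B, same construct): r = {conv:.2f}  # expect high")
print(f"Discriminant (A vs. other construct): r = {disc:.2f}  # expect low")
```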

  25. Reflective vs Formative measurement scales: • In some multi-item measures, the items measure different dimensions of a concept and do not hang together. • Such is the case with the Job Description Index, which measures job satisfaction through five different dimensions, i.e. regular promotions, fairly good chance for promotion, income adequate, highly paid, good opportunity for accomplishment.

  26. In this case, the items ‘income adequate’ and ‘highly paid’ are expected to correlate, but the items ‘opportunity for advancement’ and ‘highly paid’ might not correlate. • In this measure, not all the items relate to each other, since its dimensions address different aspects of job satisfaction. • Such a measure/scale is termed a formative scale.

  27. In other cases, the dimensions and items of a measure do correlate. • In this kind of measure/scale the different dimensions share a common basis (a common interest). • An example is the ‘Attitude towards the Offer’ scale. • Since the items are all focused on the price of an item, all the items are related; hence this scale is termed a reflective scale.
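A minimal sketch, with made-up responses, of the inter-item correlation pattern that separates the two scale types: reflective items all correlate because they share one underlying attitude, while formative items that tap different dimensions need not correlate. The simulated "pay" and "promotion" facets are hypothetical stand-ins for the dimensions discussed above:

```python
# Minimal sketch: inter-item correlation patterns for a reflective scale
# (one common cause) vs. a formative scale (distinct facets). Illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Reflective: every item is driven by the same underlying attitude
attitude = rng.normal(size=n)
reflective = np.column_stack(
    [attitude + rng.normal(scale=0.5, size=n) for _ in range(4)])

# Formative: items come from distinct facets (e.g. pay vs. promotion)
pay = rng.normal(size=n)
promotion = rng.normal(size=n)
formative = np.column_stack([pay, pay + rng.normal(scale=0.5, size=n),
                             promotion, promotion + rng.normal(scale=0.5, size=n)])

print("Reflective inter-item correlations:\n", np.round(np.corrcoef(reflective.T), 2))
print("Formative inter-item correlations:\n", np.round(np.corrcoef(formative.T), 2))
```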

  28. Recap
