Survey Methods in Usability (focus on web-based surveys)
Content-Based Strategy: Identify everything related to the concept that you are testing. Content can be based on expert opinion, user observation, a theory, etc.
>>For example: You can reason that a good interface should be easy to remember and pleasing to the eye.
Advantage: economical method
Disadvantage: the content that you derive the questions from might not be correct
Statistical Strategy: Let the data speak for itself.
Identify items related to the concept. Administer this scale to the relevant sample. Use statistical procedures (e.g., item analysis or factor analysis) to identify the items related to the concept you are interested in.
>>For example: For a usability scale identify a large number of usability items, and administer them to the sample of users.
Criterion-Based Strategy: Items are selected on the basis of their ability to differentiate between two groups of people.
Method: Develop the scale. Validate it against a criterion population.
>> For example: For a scale of usability, independently identify a few good and bad web-sites. Select the items which can distinguish the good and bad sites.
Example: developing a usability scale with all three strategies:
1. Generate items for a scale of usability from previous scales and articles in the field (content-based).
2. Select and retain scale items by item analysis (statistical methods).
3. Evaluate the scale by testing its ability to discriminate web sites against a criterion, i.e., experts' evaluation of good and bad sites (criterion-based method).
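The criterion-based evaluation above can be sketched as follows. The ratings and the mean-difference cutoff are hypothetical; in practice a significance test (e.g., a t-test per item) would be more usual than a raw cutoff.

```python
# Criterion-based item selection sketch: keep items whose mean rating
# differs substantially between expert-identified good and bad sites.

# Rows = sites, columns = scale items (made-up data).
good_sites = [
    [5, 4, 3],
    [4, 5, 3],
    [5, 5, 2],
]
bad_sites = [
    [2, 1, 3],
    [1, 2, 2],
    [2, 2, 3],
]

def col_mean(col, data):
    return sum(row[col] for row in data) / len(data)

n_items = len(good_sites[0])
selected = []
for j in range(n_items):
    diff = col_mean(j, good_sites) - col_mean(j, bad_sites)
    if abs(diff) >= 1.0:  # cutoff is an assumption, not a standard
        selected.append(j)
print("discriminating items:", selected)
```

Here the third item gets the same mean rating for good and bad sites, so it fails to discriminate and is dropped.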
This web-site’s navigation structure is:
Excellent Good Fair Poor Bad
1 2 3 4 5
For example: Rate your mother
Bad __ __ __ __ __ __ __ Good
Weak __ __ __ __ __ __ __ Strong
The two end points and the in-between points are described by graphic descriptions which denote the magnitude of the variable being measured.
All the time | Almost all the time | Most of the time | Sometimes | Never
Items are compared with other similar items (standards) on some dimension. Standards can also be brief behavioral descriptions instead of actual items.
Notepad WordPad WordPerfect MS Word
Behaviorally Anchored Rating Scales (BARS) attempt to make the terminology of rating scales more descriptive of actual behavior and therefore more objective.
For example: When I am using Microsoft Word
and the Office Assistant pops up, I am:
Rater is provided with two descriptive statements/options that are matched in social desirability.
For example: What is your previous experience with video-communication services?
I have had no previous experience
I use it very often
Constant error (or range restriction): occurs when ratings tend to be clustered in one part of the scale.
Leniency error: ratings cluster in the higher part of the scale.
Severity error: ratings cluster in the lower part of the scale.
Central tendency error: ratings cluster in the middle range.
Such tendencies do not always constitute errors. There can be cultural differences.
To check for it: compare each rater's rating with the mean of the other raters (i.e., the mean computed without that rater) for each item.
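The check described above can be sketched as follows, with hypothetical ratings. A large gap with a consistent sign across items suggests leniency (always above the others) or severity (always below).

```python
# Leniency/severity check sketch: compare each rater's rating on each
# item with the mean of the remaining raters on that item.

# Rater -> ratings on three items (made-up data).
ratings = {
    "rater_a": [4, 4, 5],
    "rater_b": [3, 3, 4],
    "rater_c": [1, 1, 2],  # consistently below the others: possible severity
}

def deviations(ratings):
    raters = list(ratings)
    n_items = len(next(iter(ratings.values())))
    devs = {}
    for r in raters:
        others = [ratings[o] for o in raters if o != r]
        # Rater's rating minus the mean of everyone else on that item.
        devs[r] = [
            ratings[r][j] - sum(o[j] for o in others) / len(others)
            for j in range(n_items)
        ]
    return devs

devs = deviations(ratings)
for rater, d in devs.items():
    print(rater, [round(x, 2) for x in d])
```

Remember that such tendencies are not always errors; cultural differences in scale use produce the same pattern.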
Halo effect: the tendency to respond to a general impression of the ratee and/or to overgeneralize favorable or unfavorable ratings based on an impression of a few dimensions.
For example: You are asked to rate a piece of software you don't know much about. You once used it for a few minutes and it crashed. Now all your ratings will be based on that impression.
Solution: Add a “don’t know”, “undecided” option
The tendency to change a rating because of the effect of some anchor point:
(a) Assigning a higher rating than justified if the item before received a very low rating, or vice versa. This is more problematic since it is a systematic error.
Solution: deal with it by randomizing item order.
(b) The tendency to use oneself as an anchor in assigning ratings. If this is a constant effect, then it might not matter.
The actual location of an item on the page might affect its rating. For example: raters often assign similar ratings to a person on items that are closer together on a printed page.
Solution: randomize order
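Per-respondent randomization of item order, as suggested above, can be sketched as follows. The item texts are hypothetical; seeding on a respondent ID is just one way to make each respondent's order reproducible.

```python
import random

# Hypothetical usability items.
items = [
    "The navigation structure is clear.",
    "Pages load quickly.",
    "The search function is useful.",
    "Error messages are helpful.",
]

def questionnaire_for(respondent_id, items):
    # Each respondent gets an independently shuffled order, so proximity
    # and anchoring effects average out across the sample.
    rng = random.Random(respondent_id)  # deterministic per respondent
    order = items[:]                    # copy; leave the master list intact
    rng.shuffle(order)
    return order

for rid in (1, 2):
    print(rid, questionnaire_for(rid, items))
```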
Most recent performance: the ratee is judged not on overall performance but on the rater's most recent impression.
If questions are ambiguous!
This will affect all items.
Solution: pilot the questionnaire and ask respondents how they interpreted the questions.
Anchors are the verbal comments above the numbers ('strongly agree', etc.).
Factual questions: having anchors above all the response options will give more accurate results.
Opinion or attitude work: it is good to indicate the central (neutral) point but anchors might not be as crucial.
Factual questions: not so important, unless issues of privacy are involved.
Opinion questionnaire: if many respondents complain about items 'not being applicable' to the situation, you should consider carefully whether these items should be changed or re-worded.
>>Alternate questions from one user to another. >>Pose different follow-up questions to different users based on their previous responses.
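Branching on a previous response, as described above, can be sketched as a simple routing function. The question texts are hypothetical, reusing the video-communication example from earlier in the notes.

```python
def next_question(previous_answer):
    # Skip-logic sketch: route the respondent to a different follow-up
    # question depending on their previous response.
    if previous_answer == "I have had no previous experience":
        return "Would you consider trying a video-communication service?"
    return "How often do you use video-communication services?"

print(next_question("I have had no previous experience"))
print(next_question("I use it very often"))
```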
Degree of Control: A person might have a positive or negative attitude towards Object A, but not have any control over the action.
Ask about control of behavior
Directly formed attitudes predict behavior better than indirectly formed attitudes:
For example: If a person who has direct experience of customer service at Etrade tells you the service “sucks”, it is more likely to correlate with behavior than an indirectly formed attitude.
Attitudes and norms in the immediate social context: Attitudes are also affected by the social norms around the person.
For example: I might have a neutral attitude towards Microsoft, but if I hang out with a bunch of people who hate Microsoft, I am likely to be affected.
Attitudes and Values: Attitudes affect behavior more if they are in accordance with the person's value system.
For example: I might think Amazon.com is a pretty good site. But maybe I have strong values about promoting small independent bookstores and try to promote their use. In that case, my opinion of Amazon.com's usability will not affect my behavior towards it.