
Evaluating Research Reports

Dr. Aidh Abu Elsoud Alkaissi

An-Najah National University

Faculty of Nursing

THE RESEARCH CRITIQUE
  • Nursing practice can be based on solid evidence only if research reports are critically appraised (To estimate the quality).
  • Consumers sometimes think that if a report was accepted for publication, the study must be sound.
  • Unfortunately, this is not the case. Indeed, most research has limitations and weaknesses.
  • Although disciplined research is the best possible means of answering many questions, no single study can provide conclusive evidence.
  • Rather, evidence is accumulated through the conduct (manage or control) and evaluation of several studies addressing the same or a similar research question.
  • Consumers who can do reflective and thorough critiques of research reports also play a role in advancing nursing knowledge.
Guidelines for the Conduct of a Written Research Critique
  • 1. Be sure to comment on the study’s strengths as well as weaknesses.
  • The critique should be a balanced analysis of the study’s worth.
  • All reports have some positive features—be sure to find and note them.
  • 2. Give specific examples of the study’s strengths and limitations. Avoid vague generalizations of praise and fault finding.
  • 3. Justify your criticisms. Offer a rationale for your concerns.
  • 4. Be objective. Avoid being overly critical of a study because you are not interested in the topic or because your world view is inconsistent with the underlying paradigm (model).
  • 5. Be sensitive in handling negative comments.
  • Put yourself in the shoes of the researcher receiving the comments.
  • Do not be condescending (Displaying a patronizingly superior attitude) or sarcastic (exhibiting lack of respect).
  • 6. Don’t just identify problems—suggest alternatives, indicating how a different approach would have solved a methodologic problem.
  • Make sure the recommendations are practical.
Guidelines for Critiquing Research Problems, Research Questions, and Hypotheses
  • 1. Has the research problem been clearly identified?
  • Has the researcher appropriately delimited its scope?
  • 2. Does the problem have significance for nursing?
  • How might the research contribute to nursing practice, administration, education, or policy?
  • 3. Is there a good fit between the research problem and the paradigm within which the research was conducted?
  • 4. Does the report formally present a statement of purpose, research questions, or hypotheses?
  • Is this information communicated clearly and concisely, and is it placed in a logical and useful location?
  • 5. Are purpose statements or questions worded appropriately (e.g., are key concepts/variables identified and the population of interest specified)?
  • 6. If there are no formal hypotheses, is their absence justifiable? Are statistical tests used despite the absence of stated hypotheses?
  • 7. Do hypotheses (if any) flow from a theory or previous research?
  • Is there a justifiable basis for the predictions?
  • 8. Are hypotheses (if any) properly worded—do they state a predicted relationship between two or more variables?
  • Are they directional or nondirectional, and is there a rationale for how they were stated?
  • Are they presented as research or as null hypotheses?
Guidelines for Critiquing Research Literature Reviews
  • 1. Does the review seem thorough—does it include all or most of the major studies conducted on the topic?
  • Does it include recent work?
  • 2. Does the review cite primarily primary sources (the original studies)?
  • 3. Is the review merely a summary of existing work, or does it critically appraise and compare key studies?
  • Does the review identify important gaps in the literature?
  • 4. Does the review use appropriate language, suggesting the tentativeness of prior findings?
  • Is the review objective?
  • 5. Is the review well organized?
  • Is the development of ideas clear?
  • 6. Does the review lay the foundation for undertaking the new study?
Guidelines for Critiquing Theoretical and Conceptual Frameworks
  • 1. Does the research report describe a theoretical or conceptual framework for the study? If not, does the absence of a theoretical framework detract (take away) from the usefulness or significance of the research?
  • 2. Does the report adequately describe the major features of the theory so that readers can understand the conceptual basis of the study?
  • 3. Is the theory appropriate to the research problem? Would a different theoretical framework have been more appropriate?
  • 4. Is the theoretical framework based on a conceptual model of nursing, or is it borrowed from another discipline?
  • Is there adequate justification for the researcher’s decision about the type of framework used?
  • 5. Do the research problem and hypotheses flow naturally from the theoretical framework, or does the link between the problem and theory seem contrived (artificially formal)?
Guidelines for Critiquing Research Designs in Quantitative Studies
  • 1. What would be the most rigorous research design for the research question? How does this correspond to the design actually used?
  • 2. If there is an intervention, was a true experimental, quasi-experimental, or preexperimental design used, and how does this affect the believability of the findings?
  • 3. If there is an intervention, was it described in sufficient detail?
  • Was the intervention reliably implemented?
  • Is there evidence of treatment “dilution” or contamination of treatments? Were participation levels in the treatment high?
  • 4. If the design is nonexperimental, was the study inherently (by its nature) nonexperimental? If not, is there an adequate justification for failure to manipulate the independent variable?
  • 5. What types of comparisons are specified in the design (e.g., before–after, between groups)? Do these comparisons adequately illuminate the relationship between independent and dependent variables?
  • If there are no comparisons, does this pose difficulties for interpreting results?
  • 6. Was the design longitudinal (collecting data at multiple points over time) or cross-sectional (collecting data at a single point in time)—and was this appropriate?
  • Was the number of data collection points reasonable?
  • 7. What procedures were used to control external (situational) factors, and were they adequate and appropriate?
  • 8. What procedures were used to control extraneous subject characteristics, and were they adequate and appropriate?
  • 9. To what extent is the study internally valid (the extent to which the findings accurately represent the causal relationship between an intervention and an outcome in the circumstances studied)? What alternative explanations must be considered (i.e., what are the threats to the study’s internal validity)?
  • 10. To what extent is the study externally valid (the extent to which a finding applies, or can be generalized, to persons, objects, settings, or times other than those that were the subject of study)? What are the threats to the study’s external validity?
Guidelines for Critiquing Qualitative and Mixed-Method Designs
  • 1. Is the research tradition for the qualitative study identified? If none was identified, can one be inferred? If more than one was identified, is this justifiable, or does it suggest “method slurring”?
  • 2. Is the research question congruent with the research tradition (i.e., is the domain of inquiry for the study congruent with the domain encompassed by the tradition)? Are the data sources, research methods, and analytic approach congruent with the research tradition?
  • 3. How well is the design described? Is the design appropriate, given the research question? What design elements might have strengthened the study (e.g., a longitudinal perspective rather than a cross-sectional one)?
  • 4. Is the study exclusively qualitative, or was the design mixed method, involving both qualitative and quantitative data? Could the design have been strengthened by the inclusion of a quantitative component?
  • 5. If the study used a mixed-method design, how did the inclusion of both approaches contribute to enhanced theoretical insights, enhanced validity, or movement toward new frontiers?
Guidelines for Critiquing Quantitative Sampling Designs
  • 1. Is the target or accessible population identified and described? Are the eligibility criteria clearly specified? Would a more limited population specification have controlled for important sources of extraneous variation not covered by the research design?
  • 2. What type of sampling plan was used? Does the report make clear whether probability or nonprobability sampling was used?
  • 3. How were subjects recruited into the sample? Does the method suggest potential biases?
  • 4. How adequate is the sampling plan in terms of yielding a representative sample?
  • 5. If the sampling plan is weak (e.g., a convenience sample), are potential biases identified? Is the sampling plan justified, given the research problem?
  • 6. Did some factor other than the sampling plan itself (e.g., a low response rate) affect the representativeness of the sample? Did the researcher take steps to produce a high response rate?
  • 7. Are the size and key characteristics of the sample described?
  • 8. Is the sample sufficiently large? Was the sample size justified on the basis of a power analysis? (A sketch of such a calculation follows this list.)
  • 9. To whom can the study results reasonably be generalized?
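
A power analysis claim (item 8) can be checked with a few lines of code. Below is a minimal sketch, assuming Python with the statsmodels library; the library choice and the hypothesized effect size are illustrative assumptions, not part of this presentation or any particular study.

```python
# A priori power analysis for a two-group comparison of means.
# Assumes the statsmodels package; the effect size is a hypothetical
# "medium" Cohen's d that a researcher would justify from pilot data
# or prior studies.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # hypothesized standardized mean difference
    alpha=0.05,       # two-tailed significance criterion
    power=0.80,       # desired probability of detecting the effect
)
print(f"Required n per group: {n_per_group:.0f}")  # about 64
```

A report claiming 80% power with far fewer participants than such a calculation implies deserves a closer look.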
Guidelines for Critiquing Qualitative Sampling Designs
  • 1. Is the setting or study group adequately described? Is the setting appropriate for the research question?
  • 2. Are the sample selection procedures described? What type of sampling strategy was used?
  • 3. Given the information needs of the study, was the sampling approach appropriate? Were dimensions of the phenomenon under study adequately represented?
  • 4. Is the sample size adequate? Did the researcher stipulate that information redundancy was achieved? Do the findings suggest a richly textured and comprehensive set of data without any apparent “holes” or thin areas?
Guidelines for Data Collection Procedures
  • 1. How were data collected? Were multiple methods used and judiciously combined?
  • 2. Who collected the data? Were data collectors judiciously chosen? Do they have traits (e.g., their professional role, their relationship with study participants) that could have undermined the collection of unbiased, high-quality data?
  • 3. Was the training of data collectors adequate? Were steps taken to improve their ability to elicit or produce high-quality data or to monitor their performance?
  • 4. Where and under what circumstances were data gathered? Was the setting for data collection appropriate?
  • 5. Were other people present during data collection? Could the presence of others have resulted in any biases?
  • 6. Did the collection of data place any burdens (in terms of time, stress, privacy issues) on participants? How might this have affected data quality?
Guidelines for Critiquing Self-Reports
  • INTERVIEWS AND QUESTIONNAIRES
  • 1. Does the research question lend itself to self-report data? Would an alternative method have been more appropriate? Should another method have been used as a supplement?
  • 2. How structured was the approach? Is the degree of structure consistent with the nature of the research question?
  • 3. Do the questions asked adequately cover the complexities of the phenomenon under investigation?
  • 4. Did the researcher use the best possible mode for collecting self-report data (i.e., personal interviews, telephone interviews, self-administered questionnaires), given the research question and respondent characteristics? Would an alternative method have improved data quality?
  • 5. [If an instrument is available for review]: Was the instrument too long or too brief? Was there an appropriate blend of open-ended and closed-ended questions? Are questions clearly and sensitively worded? Is the ordering of questions appropriate? Are response alternatives comprehensive? Could questions lead to biased responses?
  • 6. Were the instrument and data collection procedures adequately pretested?
  • SCALES
  • 7. If a scale was used, is its use justified? Does it adequately capture the construct of interest?
  • 8. If a new scale was developed for the study, is there adequate justification for not using an existing one? Was the new scale adequately tested and refined?
  • 9. Does the report provide a rationale for using the selected scale (e.g., one particular scale to measure stress, as opposed to other available scales)?
  • 10. Are procedures for eliminating or minimizing response-set biases described, and were they appropriate? (A sketch of one such procedure follows this list.)
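
One familiar safeguard against response-set bias (item 10) is to include negatively worded items on a scale and reverse-score them before summing. A hypothetical sketch in Python with pandas; the item names and the 1–5 response format are invented for illustration.

```python
import pandas as pd

# Hypothetical responses to a 4-item Likert scale (1 = strongly
# disagree ... 5 = strongly agree); items 2 and 4 are negatively
# worded, so agreement there indicates LESS of the construct.
responses = pd.DataFrame({
    "item1": [5, 4, 2],
    "item2": [1, 2, 4],   # negatively worded
    "item3": [4, 5, 1],
    "item4": [2, 1, 5],   # negatively worded
})

# Reverse-score the negatively worded items: on a 1-5 scale,
# a response x becomes (1 + 5) - x.
for item in ["item2", "item4"]:
    responses[item] = 6 - responses[item]

# Total scale score after reverse-coding
responses["total"] = responses.sum(axis=1)
print(responses)
```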
Guidelines for Critiquing Observational Methods
  • 1. Does the research question lend itself to an observational approach? Would an alternative data collection method have been more appropriate? Should another method have been used as a supplement?
  • 2. Is the degree of structure of the observational method consistent with the research question?
  • 3. To what degree were observers concealed during data collection? What effect might their known presence have had on the behaviors and events under observation?
  • 4. What was the unit of analysis of the observations? How much inference was required on the part of the observers, and to what extent might this have led to bias?
  • 5. Where did observations take place? To what extent did the setting influence the “naturalness” of the behaviors being observed?
  • 6. How were data recorded (e.g., on field notes or checklists)? Did the recording procedures seem appropriate?
  • 7. What steps were taken to minimize observer bias? How were observers trained, and how was their performance evaluated? (A sketch of one common evaluation index follows this list.)
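
Observer performance (item 7) is commonly evaluated by having two observers code the same events and computing an index of interrater agreement such as Cohen's kappa, which corrects raw agreement for chance. A minimal sketch, assuming the scikit-learn library is available; the behavior codes are invented for illustration.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical codes assigned by two trained observers to the
# same ten observed events (e.g., categories of patient behavior).
observer_a = ["calm", "agitated", "calm", "calm", "withdrawn",
              "agitated", "calm", "withdrawn", "calm", "agitated"]
observer_b = ["calm", "agitated", "calm", "agitated", "withdrawn",
              "agitated", "calm", "withdrawn", "calm", "calm"]

# Kappa corrects raw percentage agreement for agreement expected
# by chance; values above roughly 0.80 are conventionally "good".
kappa = cohen_kappa_score(observer_a, observer_b)
print(f"Cohen's kappa: {kappa:.2f}")
```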
  • 8. If a category scheme was developed, did it appear appropriate? Do the categories adequately cover the relevant behaviors? Was the scheme overly demanding of observers, leading to potential error? If the scheme was not exhaustive, did the omission of large realms of subject behavior result in an inadequate context for understanding the behaviors of interest?
  • 9. How were events or behaviors sampled? Did this plan appear to yield an adequate or representative sample of relevant behaviors?
Guidelines for Critiquing Biophysiologic Measures
  • 1. Does the research question lend itself to the collection of biophysiologic data? Would an alternative data collection method have been more appropriate? Should another method have been used as a supplement?
  • 2. Was the proper instrumentation used to obtain the biophysiologic measurements, or would an alternative have been more suitable?
  • 3. Was care taken to obtain accurate data? For example, did the researcher’s activities permit accurate recording?
  • 4. Did the researcher have the skills necessary for proper use and interpretation of the biophysiologic measures?
Guidelines for Evaluating Data Quality in Quantitative Studies
  • 1. Is there congruence between the research variables as conceptualized (i.e., as discussed in the introduction) and as operationalized (i.e., as described in the methods section)?
  • 2. If operational definitions (or scoring procedures) are specified, do they clearly indicate the rules of measurement? Do the rules seem sensible? Were data collected in such a way that measurement errors were minimized?
  • 3. Does the report offer evidence of the reliability of measures? Does the evidence come from the research sample itself, or is it based on other studies? If the latter, is it reasonable to conclude that data quality for the research sample and the reliability sample would be similar (e.g., are sample characteristics similar)?
  • 4. If reliability is reported, which estimation method was used? Was this method appropriate? Should an alternative or additional method of reliability appraisal have been used? Is the reliability sufficiently high? (A sketch of one common estimate follows this list.)
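
For multi-item scales, the most commonly reported reliability estimate is Cronbach's alpha, an index of internal consistency: alpha = k/(k−1) × (1 − Σ item variances / variance of total scores). Below is a minimal sketch in Python with numpy; the response data are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability for a score matrix:
    rows = respondents, columns = scale items."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 respondents x 4 items
scores = np.array([[4, 5, 4, 4],
                   [2, 2, 3, 2],
                   [5, 4, 5, 5],
                   [3, 3, 2, 3],
                   [4, 4, 4, 5]])
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```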
  • 5. Does the report offer evidence of the validity of the measures? Does the evidence come from the research sample itself, or is it based on other studies? If the latter, is it reasonable to believe that data quality for the research sample and the validity sample would be similar (e.g., are the sample characteristics similar)?
  • 6. If validity information is reported, which validity approach was used? Was this method appropriate? Does the validity of the instrument appear to be adequate?
  • 7. If there is no reliability or validity information, what conclusion can you reach about the quality of the data in the study?
  • 8. Were the research hypotheses supported? If not, might data quality play a role in the failure to confirm the hypotheses?
Guidelines for Evaluating Data Quality in Qualitative Studies
  • 1. Does there appear to be a strong relationship between the phenomena of interest as conceptualized (i.e., as described in the introduction) and as described in the discussion of the data collection approach?
  • 2. Does the report discuss efforts to enhance or evaluate the trustworthiness of the data? If not, is there other information that allows you to conclude that data are of high quality?
  • 3. Which techniques (if any) did the researcher use to enhance and appraise data quality? Was the investigator in the field an adequate amount of time? Was triangulation used, and, if so, of what type? Did the researcher search for disconfirming evidence? Were there peer debriefings or member checks? Do the researcher’s qualifications enhance the credibility of the data? Did the report include information on the audit trail for data analysis?
  • 4. Were the procedures used to enhance and document data quality adequate?
  • 5. Given the efforts to enhance data quality, what can you conclude about the credibility, transferability, dependability, and confirmability of the data? In light of this assessment, how much faith can be placed in the results of the study?
Guidelines for Critiquing Quantitative Analyses
  • 1. Does the report include any descriptive statistics? Do these statistics sufficiently describe the major characteristics of the researcher’s data set?
  • 2. Were indices of both central tendency and variability provided in the report? If not, how does the absence of this information affect the reader’s understanding of the research variables?
  • 3. Were the correct descriptive statistics used (e.g., was a median used when a mean would have been more appropriate)? (A brief illustration follows this list.)
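
The median-versus-mean question in item 3 matters most with skewed data, where a few extreme values pull the mean away from the typical case. A small illustration in Python; the length-of-stay figures are hypothetical.

```python
import numpy as np

# Hypothetical hospital length-of-stay data (days): most stays are
# short, but one outlier skews the distribution to the right.
los = np.array([2, 3, 3, 4, 4, 5, 5, 6, 7, 45])

print(f"mean   = {los.mean():.1f}")      # 8.4, pulled up by the outlier
print(f"median = {np.median(los):.1f}")  # 4.5, resistant to the outlier
print(f"sd     = {los.std(ddof=1):.1f}") # index of variability
```

Here the median describes the typical stay far better than the mean, so a report using the mean alone could mislead.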
  • 4. Does the report include any inferential statistics? Was a statistical test performed for each of the hypotheses or research questions? If inferential statistics were not used, should they have been?
  • 5. Was the selected statistical test appropriate, given the level of measurement of the variables?
  • 6. Was a parametric test used? Does it appear that the assumptions for the use of parametric tests were met? If a nonparametric test was used, should a more powerful parametric procedure have been used instead? (A sketch contrasting the two follows this list.)
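
One way to think about item 6: the parametric t test assumes roughly normal distributions, while its nonparametric counterpart (the Mann-Whitney U test) does not, at the cost of some power when normality holds. A minimal sketch, assuming scipy; the group data are simulated, not from any study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical outcome scores for treatment and control groups
treatment = rng.normal(loc=52, scale=10, size=30)
control = rng.normal(loc=47, scale=10, size=30)

# Probe the normality assumption behind the parametric t test
w, p_norm = stats.shapiro(treatment)
print(f"Shapiro-Wilk p = {p_norm:.3f}")  # large p: no evidence against normality

# Parametric test (more powerful when its assumptions hold)
t, p_t = stats.ttest_ind(treatment, control)
# Nonparametric alternative (no normality assumption)
u, p_u = stats.mannwhitneyu(treatment, control)
print(f"t test p = {p_t:.3f}, Mann-Whitney p = {p_u:.3f}")
```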
  • 7. Were any multivariate procedures used? If so, does it appear that the researcher chose the appropriate test? If multivariate procedures were not used, should they have been? Would the use of a multivariate procedure have improved the researcher’s ability to draw conclusions about the relationship between the dependent and independent variables? (A sketch of one such procedure follows this list.)
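
Multiple regression is a typical multivariate procedure: it estimates the relationship between a dependent variable and several independent variables simultaneously, so each coefficient is adjusted for the others. A minimal sketch with statsmodels; all variable names and data are simulated for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100
# Hypothetical predictors: age (years) and a 0-10 pain score
age = rng.uniform(20, 80, n)
pain = rng.uniform(0, 10, n)
# Hypothetical outcome depends on both predictors plus noise
anxiety = 10 + 0.1 * age + 1.5 * pain + rng.normal(0, 3, n)

X = sm.add_constant(np.column_stack([age, pain]))  # intercept + predictors
model = sm.OLS(anxiety, X).fit()
print(model.summary())  # coefficients, p-values, R-squared
```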
  • 8. In general, does the report provide a rationale for the use of the selected statistical tests? Does the report contain sufficient information for you to judge whether appropriate statistics were used?
  • 9. Was there an appropriate amount of statistical information reported? Are the findings clearly and logically organized?
  • 10. Were the results of any statistical tests significant? What do the tests tell you about the plausibility of the research hypotheses?
  • 11. Were tables used judiciously to summarize large masses of statistical information? Are the tables clearly presented, with good titles and carefully labeled column headings? Is the information presented in the text consistent with the information presented in the tables? Is the information totally redundant?
Guidelines for Critiquing Qualitative Analyses
  • 1. Given the nature of the data, were they best analyzed qualitatively? Were the data analysis techniques appropriate for the research design?
  • 2. Is the initial categorization scheme described? If so, does the scheme appear logical and complete? Does there seem to be unnecessary overlap or redundancy in the categories?
  • 3. Were manual methods used to index and organize the data, or was a computer program used? (A toy example of such an index follows this list.)
  • 4. Is the process by which a thematic analysis was performed described? What major themes emerged? If excerpts from the data are provided, do the themes appear to capture the meaning of the narratives—that is, does it appear that the researcher adequately interpreted the data and conceptualized the themes?
  • 5. Is the analysis parsimonious—could two or more themes be collapsed into a broader and perhaps more useful conceptualization?
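
Whether indexing is done by hand or by program (item 3), the underlying structure is the same: codes mapped onto the transcript segments they tag. A toy sketch in plain Python; the codes, participant ids, and excerpt references are invented for illustration.

```python
# A toy coding index: each code points to the transcript excerpts
# (participant id, location) that were tagged with it.
coding_index = {
    "uncertainty": [("P01", "lines 12-18"), ("P04", "lines 3-7")],
    "family support": [("P02", "lines 40-44")],
    "coping: humor": [("P01", "lines 52-55"), ("P03", "lines 9-14")],
}

# During analysis, all excerpts for a theme can be retrieved at once
for source in coding_index["uncertainty"]:
    print(source)
```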
  • 6. What evidence does the report provide that the researcher’s analysis is accurate and replicable?
  • 7. Were data displayed in a manner that allows you to verify the researcher’s conclusions? Was a conceptual map, model, or diagram effectively displayed to communicate important processes?
  • 8. Was the context of the phenomenon adequately described? Does the report give you a clear picture of the social or emotional world of study participants?
  • 9. If the result of the study is an emergent theory or conceptualization, does it yield a meaningful and insightful picture of the phenomenon under study? Is the resulting theory or description trivial or obvious?
Guidelines for Critiquing the Ethical Aspects of a Study
  • 1. Were study participants subjected to any physical harm, discomfort, or psychological distress? Did the researchers take appropriate steps to remove or prevent harm or minimize discomfort?
  • 2. Did benefits to participants outweigh any potential risks or actual discomfort they experienced? Did the benefits to society or nursing outweigh costs to participants?
  • 3. Was any coercion or undue influence used in recruiting participants?
  • 4. Were groups omitted from the inquiry (e.g., women, minorities) without a justifiable rationale?
  • 5. Were vulnerable subjects used? Were special precautions instituted because of their vulnerable status?
  • 6. Were participants deceived in any way? Were they fully aware of participating in a study, and did they understand the purpose of the research?
  • 7. Did participants have an opportunity to decline participation? Were appropriate consent procedures implemented? If not, were there valid and justifiable reasons?
  • 8. Were participants told about any real or potential risks associated with participation in the study? Were study procedures fully described in advance?
  • 9. Were appropriate steps taken to safeguard the privacy of participants?
  • 10. Was the study approved and monitored by an Institutional Review Board or other similar ethics review committee? If not, did the researcher have any type of external review relating to ethical considerations?
  • 11. What are the major limitations of the design? Are these limitations acknowledged and taken into account in the interpretation of results?
  • 12. Could the design have been strengthened by the inclusion of a qualitative component?
Guidelines for Critiquing the Interpretive Dimensions of a Research Report
  • INTERPRETATION OF THE FINDINGS
  • 1. Are all important results discussed? If not, what is the likely explanation for omissions?
  • 2. Are interpretations consistent with results? Do the interpretations take into account methodologic limitations?
  • 3. What types of evidence are offered in support of the interpretation, and is that evidence persuasive? Are results interpreted in light of findings from other studies? Are results interpreted in terms of the original study hypotheses and the conceptual framework?
  • 4. Are alternative explanations for the findings mentioned, and is the rationale for their rejection presented?
  • 5. In quantitative studies, does the interpretation distinguish between practical and statistical significance? (A brief illustration follows this list.)
  • 6. Are any unwarranted interpretations of causality made?
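
Item 5's distinction becomes concrete with an effect size: in a large sample, a clinically trivial difference can still be statistically significant. A minimal sketch with numpy and scipy, computing Cohen's d alongside the p value; the data are simulated for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Large hypothetical samples with a very small true difference
a = rng.normal(100.0, 15.0, 5000)
b = rng.normal(101.0, 15.0, 5000)

t, p = stats.ttest_ind(a, b)

# Cohen's d: difference in means over the pooled standard deviation
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
d = (b.mean() - a.mean()) / pooled_sd

print(f"p = {p:.4f}")  # likely "significant" with n = 5000 per group
print(f"d = {d:.2f}")  # yet the effect is small (around 0.07)
```

A careful interpretation would report both and discuss whether an effect of this size matters in practice.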
  • IMPLICATIONS OF THE FINDINGS
  • 7. Does the researcher offer implications of the research for nursing practice, nursing theory, or nursing research? Are implications of the study omitted, although a basis for them is apparent?
  • 8. Are the stated implications appropriate, given the study’s limitations?
  • 9. Are generalizations made that are not warranted on the basis of the sample used?
  • RECOMMENDATIONS
  • 10. Are specific recommendations made concerning how the study’s methods could be improved? Are there recommendations for future research investigations?
  • 11. Are recommendations for specific nursing actions presented?
  • 12. Are recommendations consistent with the findings and with the existing body of knowledge?
Guidelines for Critiquing the Presentation of a Research Report
  • 1. Does the report include a sufficient amount of detail to permit a thorough critique of the study’s purpose, conceptual framework, design and methods, handling of critical ethical issues, analysis of data, and interpretation?
  • 2. Is the report well written and grammatical? Are pretentious words or jargon used when simpler wording would have been possible?
  • 3. Is the report well organized, or is the presentation confusing? Is there an orderly, logical presentation of ideas? Are transitions smooth, and is the report characterized by continuity of thought and expression?
  • 4. Is the report sufficiently concise, or does the author include a lot of irrelevant detail? Are important details omitted?
  • 5. Does the report suggest overt biases?
  • 6. Is the report written using tentative language, as befits the nature of disciplined inquiry, or does the author talk about what the study did or did not “prove”?
  • 7. Is sexist language avoided?
  • 8. Does the title of the report adequately capture the key concepts and the population under investigation? Does the abstract (if any) adequately summarize the research problem, study methods, and important findings?