Portraits of a Scholar, from the 16th Century… to today
Artist: Domenico Feti
Artist: Ferdinand Bol
Artist: Rembrandt Harmenszoon van Rijn
Artist: JKoshi’s photostream, Flickr
Principles for Quality Research and Quality Evidence
Ted Kreifels, Ph.D.
Overview
Importance of good research
Traits of quality research
Standards and methods used to assess quality research and quality evidence
BAD research practices
Common causes of bias in data
How to trust information
Errors in Research
We (typically) have a sincere desire and an interest in determining what is TRUE
based on the information and evidence we have available
Bad research causes real harm and deserves strong censure
Quality Research and Quality Evidence are related, but separate topics
Quality Research pertains to the scientific process
Quality Evidence is the cumulative body of research data, and pertains to the judgment regarding the strength of, and the confidence one has in, the findings emanating from the scientific process
If scientific research lacks credibility, it’s difficult to make confident, concrete assertions or predictions
Confidence comes from the robustness of the research and of the analysis done to synthesize the results
Honest and Thorough Reporting
Pose a significant, important, well-defined question that can be investigated empirically and that contributes to the knowledge base
Offer a description of the context and existing information about an issue
Apply methods that best address the question of interest
Test questions that are linked to relevant theory and consider various perspectives
Ensure an independent, balanced, and objective approach to the research with clear inferential reasoning supported by a complete coverage of relevant literature
Use appropriate and reliable conceptualization and measurement of variables
Provide sufficient description of the samples, and any comparison groups
Ensure the study design, methods, and procedures are transparent and provide the necessary information to reproduce or replicate the study
Present evidence, with data and analysis, in a format that others can reproduce or replicate
Use adequate references, including original sources, alternative perspectives, and criticism
Adhere to quality standards for reporting (i.e., clear, cogent, complete)
Submit research to a peer-review process
The more one aligns with these standards, the higher the quality.
Following only a few of these principles is insufficient to assert quality.
Evaluate alternative explanations for any findings, and discuss critical assumptions, contrary findings, and alternative interpretations
Assess the possible impact of systematic bias (see the sampling-bias sketch after this list)
Use caution in reaching conclusions and drawing implications
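To make the systematic-bias item above concrete, here is a minimal sketch in plain Python; the population, group sizes, and response weights are invented for illustration and are not taken from the presentation. It shows how over-sampling one subgroup pulls the estimated average away from the true population average.

```python
import random

random.seed(1)

# Hypothetical population: two groups with different true values.
# Group A (70% of the population) averages ~50; group B (30%) averages ~80.
population = [random.gauss(50, 5) for _ in range(7000)] + \
             [random.gauss(80, 5) for _ in range(3000)]

true_mean = sum(population) / len(population)

# Unbiased sample: every member is equally likely to be selected.
unbiased = random.sample(population, 500)

# Systematically biased sample: group B (the last 3000 members) is far
# more likely to respond, e.g. a survey circulated mainly to that group.
weights = [1] * 7000 + [10] * 3000
biased = random.choices(population, weights=weights, k=500)

print(f"True population mean: {true_mean:.1f}")
print(f"Unbiased sample mean: {sum(unbiased) / len(unbiased):.1f}")
print(f"Biased sample mean:   {sum(biased) / len(biased):.1f}")
# The biased sample overstates the mean because one group is
# over-represented; every statistic computed from it inherits the bias.
```

The specific numbers do not matter; the point is that a biased selection mechanism, left unassessed, silently distorts everything computed downstream.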
Designing research questions to reach particular conclusions
Using faulty logic to reach conclusions
Using biased data and analysis methods
Ignoring limitations of analysis and exaggerating implications of results
Using unqualified researchers not familiar with specialized issues
Not presenting details of key data and analysis for review by others
Citing special interest groups or popular media, rather than peer-reviewed professional and academic organizations
And, the MOST COMMON mistake:
Assuming association (events that occur together)…
Proves causation (one event causes another)
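To make the association-versus-causation pitfall concrete, here is a minimal sketch in plain Python with invented data: two outcomes that share a hidden common cause correlate strongly even though neither causes the other.

```python
import random

random.seed(2)

# Hypothetical hidden common cause, e.g. daily temperature.
temperature = [random.uniform(10, 35) for _ in range(1000)]

# Two outcomes that both depend on temperature but not on each other.
ice_cream_sales = [5 * t + random.gauss(0, 10) for t in temperature]
sunburn_cases = [2 * t + random.gauss(0, 5) for t in temperature]

def pearson_r(xs, ys):
    """Plain-Python Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"r(ice cream, sunburn) = {pearson_r(ice_cream_sales, sunburn_cases):.2f}")
# The correlation is close to 1, yet ice cream does not cause sunburn:
# both merely track the hidden common cause (temperature).
```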
Have I missed anything?
Conclusion A: As measured per capita, various safety efforts have FAILED
Conclusion B: Conditions require more people to drive further, yet vehicle handling and safety have improved, so people feel safer while taking on more risk (driving faster, leaving less distance between cars, etc.); various safety strategies (e.g., better roads, vehicles, laws) have PASSED
No single right or wrong reference unit—different reference units reflect different perspectives and may affect analytical results
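A small worked example may help here; the figures below are hypothetical, chosen only to show how the reference unit can flip the apparent conclusion (a per-capita rate that worsens while a per-mile rate improves), not to describe any real data set.

```python
# Hypothetical traffic-safety figures (NOT real data), chosen only to
# show how the reference unit changes the apparent trend.
periods = {
    # label: (fatalities, population, vehicle-miles travelled)
    "then": (35_000, 250_000_000, 2.0e12),
    "now": (43_000, 280_000_000, 3.5e12),
}

for label, (deaths, pop, miles) in periods.items():
    per_100k_people = deaths / pop * 100_000
    per_100m_miles = deaths / miles * 100_000_000
    print(f"{label}: {per_100k_people:.1f} deaths per 100k people, "
          f"{per_100m_miles:.2f} per 100M vehicle-miles")

# Per capita the rate WORSENS (14.0 -> 15.4), while per mile driven it
# IMPROVES (1.75 -> 1.23): the same raw data, viewed through different
# reference units, supports the opposite-sounding Conclusions A and B.
```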
What are your CONCLUSIONS?
What further QUESTIONS would you ask?
Different quality researchers reflect different perspectives, knowledge, and experience
* Sixty Methodological Potholes, David Huron, Ohio State University, 2000
* Evaluating Internet Research Sources, Robert Harris, November 2010
Research and Evidence Challenge:
Are a liquid cup and a dry cup the same measure?
I used the internet to research this question and draw a conclusion
What percentage of internet sources answered: Yes/No?
On Propaganda
Collected from several sources, including dictionaries, Wikipedia, and * Garth Jowett and Victoria O'Donnell, Propaganda and Persuasion, 4th ed., Sage Publications, p. 7
Propaganda is sometimes misrepresented as objective research!
“There are no facts, only interpretations” - Nietzsche
Anyone who denies the value of truth and objective analysis is really bull****ting!
The following section regarding Errors in Research and the workshop case studies were taken from
On Being a Scientist: Responsible Conduct in Research, 2nd Edition
- The National Academy of Sciences (NAS)
- National Academy of Engineering (NAE)
- Institute of Medicine (IOM)
Printed by the National Academy Press, Washington, D.C., 1995
The “Honest Error”
Usually caught internally through informal and formal peer review processes
Dealt with internally through evaluations and appointments
Deborah, a third-year graduate student, and Kathleen, a postdoc, have made a series of measurements on a new experimental semiconductor material using an expensive neutron source at a national laboratory. When they get back to their own lab and examine the data, they obtain the data points shown on the accompanying graph. A newly proposed theory predicts results indicated by the curve.
During the measurements at the national lab, Deborah and Kathleen observed that there were power fluctuations they could not control or predict. Furthermore, they discussed their work with another group doing similar experiments, and they knew that the other group had gotten results confirming the theoretical prediction and was writing a manuscript describing their results.
In writing up their own results for publication, Kathleen suggests dropping the two anomalous data points near the abscissa (the solid squares) from the published graph and from a statistical analysis. She proposes that the existence of the data points be mentioned in the paper as possibly due to power fluctuations and being outside the expected standard deviation calculated from the remaining data points. “These two runs,” she argues to Deborah, “were obviously wrong.”
How should the data from the two suspected runs be handled?
Should the data be included in tests of statistical significance and why?
What other sources of information, in addition to their faculty advisor, can Deborah and Kathleen use to help decide?
Deborah and Kathleen’s principal obligation, in writing up their results for publication, is to describe what they have done and give the basis for their actions. They must therefore examine how they can meet this obligation within the context of the experiment they have done.
Questions that need to be answered include:
If the authors state in the paper that data have been rejected because of problems with the power supply, should the data points still be included in the published chart?
Should statistical analyses be done that both include and exclude the questionable data? (See the sketch after these questions.)
If conventions within their discipline allow for the use of statistical devices to eliminate outlying data points, how explicit do Deborah and Kathleen need to be in the published paper about the procedures they have followed?
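As one possible way to explore the question above about analyses that include and exclude the questionable data, here is a minimal sketch in plain Python; the measurements and the suspect-run indices are invented placeholders, not Deborah and Kathleen's data. It fits a simple least-squares line to all the points and again with the two suspect points removed, so their influence can be reported alongside the reason they are doubted.

```python
from statistics import mean

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    mx, my = mean(xs), mean(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Invented measurements standing in for the case study's data; the last
# two points play the role of the "anomalous" runs suspected of
# power-supply problems.
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 4.0, 6.2, 7.9, 10.1, 12.0, 3.5, 2.8]
suspect = {6, 7}  # indices of the questionable runs

all_fit = fit_line(x, y)
kept = [(xi, yi) for i, (xi, yi) in enumerate(zip(x, y)) if i not in suspect]
trimmed_fit = fit_line([p[0] for p in kept], [p[1] for p in kept])

print(f"All points:       slope={all_fit[0]:.2f}, intercept={all_fit[1]:.2f}")
print(f"Without suspects: slope={trimmed_fit[0]:.2f}, intercept={trimmed_fit[1]:.2f}")
# Reporting both results, plus the stated reason the two runs are doubted
# (uncontrolled power fluctuations), lets readers judge the data for
# themselves rather than having the points silently dropped.
```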
John, a third-year graduate student, is participating in a department-wide seminar where students, postdocs, and faculty members discuss work in progress. An assistant professor prefaces her comments by saying that the work she is about to discuss is sponsored by both a federal grant and a biotechnology firm for which she consults.
In the course of the talk, John realizes that he has been working on a technique that could make a major contribution to the work being discussed. But his faculty advisor consults for a different, and competing, biotechnology firm.
How should John participate in this seminar?
What, if anything, should he say to his advisor—and when?
What implications does this case raise for the traditional openness and sharing of data, materials, and findings that have characterized modern science?
Science thrives in an atmosphere of open communication. When communication is limited, progress is limited for everyone. John therefore needs to weigh the advantages of keeping quiet, if in fact there are any, against the damage that accrues to science if he keeps his suggestions to himself. He might also ask himself how keeping quiet might affect his own life in science.
Does John want to appear to his advisor and his peers as someone who is less than forthcoming with his ideas?
Will he enjoy science as much if he purposefully limits communication with others?
Why is good research important?
What are the traits of quality research?
Can you provide a few examples of standards and methods used to assess quality research and quality evidence?
What are examples of bad research?
What are a few common causes of bias in data and methodological errors?
How does one trust information from the internet?
What are the three categories of errors in research?