
Basics



Presentation Transcript


  1. Basics

  2. Syllabus • Syllabus • Requirements • Readings • Topics • Goals

  3. Chapter 1--basics • Types of studies • Descriptive • Relational • Causal • Cross-sectional vs. longitudinal • Repeated measures vs. time series • Third variable problem—examples? • Independent vs. dependent variable • Exhaustive and mutually exclusive categories (and relevant ones)
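  The third-variable bullet lends itself to a quick demonstration. Below is a minimal simulation sketch (the scenario, variable names, and numbers are my own illustration, not from the slides): ice cream sales and drownings are both driven by temperature, so they correlate even though neither causes the other, and the association disappears once the third variable is partialled out.

```python
# Hypothetical illustration of the third-variable problem (invented scenario and numbers).
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

temperature = rng.normal(70, 15, n)                    # the "third variable" driving both
ice_cream = 2.0 * temperature + rng.normal(0, 20, n)   # caused by temperature
drownings = 0.5 * temperature + rng.normal(0, 10, n)   # also caused by temperature

# The raw correlation is substantial even though neither variable causes the other.
print("raw r:", round(np.corrcoef(ice_cream, drownings)[0, 1], 2))

# Partial out temperature from both; the remaining correlation is near zero.
resid_ice = ice_cream - np.polyval(np.polyfit(temperature, ice_cream, 1), temperature)
resid_drown = drownings - np.polyval(np.polyfit(temperature, drownings, 1), temperature)
print("r controlling for temperature:", round(np.corrcoef(resid_ice, resid_drown)[0, 1], 2))
```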

  4. Theory and operational definitions • What makes a good hypothesis? • When and how is qualitative research used in psych? • When might your unit of analysis not be individuals? • What are examples of the ecological fallacy (conclusions about individuals based on group-level data) and the exception fallacy (conclusions about a group based on individual cases) in research? • Is theoretical research better? What is good and bad about it? • Why are operational definitions important?
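  For the ecological fallacy question, a small numeric sketch (invented groups and numbers, added only for illustration) shows how aggregate data can point in the opposite direction from individual-level data: the correlation between group means is strongly positive while the correlation inside every group is negative.

```python
# Hypothetical numbers illustrating the ecological fallacy: group-level (aggregate)
# data suggests a positive relationship, but within every group the individual-level
# relationship is negative.
import numpy as np

rng = np.random.default_rng(1)
within_rs, mean_x, mean_y = [], [], []
for group_level in (0.0, 5.0, 10.0):                                   # three hypothetical groups
    x = group_level + rng.normal(0, 1, 200)
    y = 2.0 * group_level - (x - group_level) + rng.normal(0, 1, 200)  # negative slope within a group
    within_rs.append(np.corrcoef(x, y)[0, 1])
    mean_x.append(x.mean())
    mean_y.append(y.mean())

print("within-group r's (individual level):", np.round(within_rs, 2))              # roughly -0.7 each
print("group-mean r (aggregate level):", round(np.corrcoef(mean_x, mean_y)[0, 1], 2))  # roughly +1.0
```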

  5. Philosophy and validity • How do inductive and deductive research fit together? • Explain metaphysics, positivism, determinism, post-positivism, critical realism, constructivism, and evolutionary epistemology. What do these approaches have to do with how science is conducted and interpreted? • What are the differences between conclusion, internal, construct, and external validity? How can each be assessed? • What are threats to conclusion validity?

  6. Ethics • Where should the line be drawn between protecting participants and letting science advance? • What are issues of confidentiality and anonymity in research? What might cause risks to confidentiality? • How can we deal with the right to service in research?

  7. Idea generation • How do you come up with ideas? How do you know if they are good? • What types of feasibility issues should you consider in research? • Why is a literature review important? How should it be done?

  8. HARKing (Kerr, 1998) • What is HARKing? What is the alternative? • What examples have you seen? How often do you think it occurs? • How can it be identified? • Do scientists approve of it? • What are the reasons for HARKing? • Do we always need a hypothesis? • Why don’t we put more emphasis on disconfirmation?

  9. What are the costs of HARKing? • How does this relate to the file drawer problem? • Is it ethical? • Does it negatively affect how others perceive the field (cases of fraud, use as criticism)? • Are the benefits greater than the costs? • How can we discourage it?

  10. False-positive psychology (Simmons et al., 2011) • False positive = incorrect rejection of the null hypothesis • What do they mean by researcher degrees of freedom? • What are some common ones? • Do these things cause a problem? How?
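  To see how researcher degrees of freedom cause a problem, here is a short simulation sketch of one such practice (the code and parameter values are my own illustration, not taken from the paper): with two correlated dependent variables and no true effect, reporting whichever test comes out significant roughly doubles the nominal 5% false-positive rate.

```python
# Sketch (illustrative parameters): one researcher degree of freedom is measuring two
# correlated DVs and reporting whichever one "works". Both DVs are pure noise here,
# so every rejection is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_group, rho = 10_000, 20, 0.5
false_pos = 0
for _ in range(n_sims):
    cov = [[1, rho], [rho, 1]]
    group_a = rng.multivariate_normal([0, 0], cov, n_per_group)   # no true group difference
    group_b = rng.multivariate_normal([0, 0], cov, n_per_group)
    p1 = stats.ttest_ind(group_a[:, 0], group_b[:, 0]).pvalue
    p2 = stats.ttest_ind(group_a[:, 1], group_b[:, 1]).pvalue
    if min(p1, p2) < 0.05:                                        # report whichever DV is significant
        false_pos += 1

print(f"false-positive rate: {false_pos / n_sims:.3f}  (nominal alpha = 0.05)")
```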

  11. Suggestions for authors • Authors should decide the rule for stopping data collection before collecting any data, and report that rule. • Authors should collect at least 20 observations per cell. • Authors should report all variables collected in a study. • Authors should report all experimental conditions, including failed manipulations. • If observations are excluded as outliers, authors should also report the results with them included. • If an analysis includes a covariate, authors should also report the results without it.
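  The first suggestion, fixing the stopping rule in advance, can be motivated with another small sketch (again my own illustration, with invented parameters): both groups are drawn from the same population, yet checking the result at n = 20 per group and adding 10 more observations whenever p ≥ .05 pushes the false-positive rate well above 5%.

```python
# Sketch (illustrative parameters): optional stopping / "topping up" the sample after
# peeking at the p-value inflates the false-positive rate even with no true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_initial, n_extra = 10_000, 20, 10
false_pos = 0
for _ in range(n_sims):
    a = rng.normal(0, 1, n_initial)          # both groups come from the same population
    b = rng.normal(0, 1, n_initial)
    p = stats.ttest_ind(a, b).pvalue
    if p >= 0.05:                            # not significant? collect 10 more per group and re-test
        a = np.concatenate([a, rng.normal(0, 1, n_extra)])
        b = np.concatenate([b, rng.normal(0, 1, n_extra)])
        p = stats.ttest_ind(a, b).pvalue
    if p < 0.05:
        false_pos += 1

print(f"false-positive rate with optional stopping: {false_pos / n_sims:.3f}")
```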

  12. Suggestions for reviewers • Reviewers should ensure that authors follow the requirements above. • Reviewers should be more tolerant of imperfect (messy) results. • Reviewers should make sure results don't depend on arbitrary analytic decisions. • Reviewers should require an exact replication when the justifications for data collection or analysis decisions are not compelling. • Why don't the other ideas work?
