
College Educational Quality Research Agenda


Presentation Transcript


  1. College Educational Quality Research Agenda Corbin M. Campbell, Assistant Professor Theresa Cruz Paul, Research Assistant Higher and Postsecondary Education Program Teachers College, Columbia University

  2. Advisory Board
  • Jennifer Glaser, Director of Student Services, Fairfax High School
  • Dr. Wendell Hall, Deputy Director, Institute for Higher Education Policy
  • Dr. Karen Inkelas, Director of the Center for the Advanced Study of Teaching and Learning, University of Virginia
  • Dr. Christine Keller, Executive Director, Voluntary System of Accountability; Associate Vice President for Academic Affairs, APLU
  • Sharon La Voy, Director of Assessment, University of Maryland
  • Dr. Jennifer Lindholm, Special Assistant to the Dean and Accreditation Coordinator, University of California, Los Angeles
  • Dr. George Mehaffy, Vice President for Academic Leadership and Change, American Association of State Colleges and Universities
  • Dr. Jessica Mislevy, Research Social Scientist, SRI International
  • Dr. Daniel Newhart, Senior Researcher & Associate Director, Center for the Study of Student Life, The Ohio State University
  • Dr. Anna Neumann, Professor of Higher Education, Teachers College, Columbia University
  • Dr. KerryAnn O'Meara, Associate Professor of Higher Education, University of Maryland, College Park
  • Dr. Aaron Pallas, Professor of Sociology and Education, Teachers College, Columbia University
  • Dr. Stephen Porter, Professor of Higher Education, North Carolina State University
  • Dr. Priscilla Wohlstetter, Distinguished Research Professor, Teachers College, Columbia University
  A particular thank you to George Mehaffy and AASCU for hosting us!

  3. Claim There is currently no comprehensive way for the public and prospective students and families to know about the quality of the education that is happening inside the walls of a college or university—and how that quality compares to the quality at other colleges and universities.

  4. Contrast Public: “Black box” of higher education vs. Higher Educators: Era of accountability, metrics, testing, learning outcomes, abundance of data!

  5. The Black Box of Higher Education Extensive accountability data is largely unseen by the public: • Accreditation • Collegiate Learning Assessment (CLA) • Course-based learning outcomes • Course evaluations

  6. Impacts of the Black Box [& Rising Costs] • Spellings Commission • Academically Adrift (Arum & Roksa, 2011) • Numerous articles in top newspapers questioning academic rigor in higher education. • 12/10/12: “Who will hold colleges accountable?” (Carey, NYT, p. A27) • “Affront on Higher Education”

  7. How do prospective students, parents, and the public decide which institution has the highest quality education? • US News • World Rankings • Other ranking venues: Princeton Review • Word of Mouth—Family, Friends, High School Guidance Counselors • Reputation [Newer, but underused forms: NSSE?, VSA/College Portrait?]

  8. Questions Asked in These Measures… • What proportion of students graduate? How fast do they graduate? • What is the quality of the entering students (as measured primarily by the SAT)? • What are the resources of an institution? (faculty-student ratio; alumni giving) • What is the reputation of this institution?

  9. US News & World Report’s Formula (sub-weights are shares of their category)
  • Graduation & retention rates: 20%
    • Average graduation rate: 80%
    • Average freshman retention rate: 20%
  • Financial resources: 10%
    • Average educational expenditure per student: 100%
  • Alumni giving: 5%
  • Graduation rate performance: 5%
  • Peer assessment: 25%
  • Student selectivity: 15%
    • Acceptance rate: 10%
    • High school ranking: 40%
    • SAT/ACT scores: 50%
  • Faculty resources: 20%
    • Faculty compensation: 35%
    • % faculty with top terminal degrees: 15%
    • Percent full-time faculty: 5%
    • Student/faculty ratio: 5%
    • Class size, 1-19 students: 30%
    • Class size, 50 or more: 10%
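To make the arithmetic of the formula explicit, the hedged sketch below multiplies each sub-indicator's weight by its category weight to show its effective share of the overall score. The weights are those listed above; this is an illustrative calculation only, not a reproduction of U.S. News' full scoring procedure.

```python
# Illustrative only: effective weight of each indicator in the overall score,
# computed as (category weight) x (sub-weight within its category).
# Weights are taken from the slide above.

category_weights = {
    "Graduation & retention rates": 0.20,
    "Peer assessment": 0.25,
    "Student selectivity": 0.15,
    "Faculty resources": 0.20,
    "Financial resources": 0.10,
    "Alumni giving": 0.05,
    "Graduation rate performance": 0.05,
}

sub_weights = {
    "Graduation & retention rates": {
        "Average graduation rate": 0.80,
        "Average freshman retention rate": 0.20,
    },
    "Student selectivity": {
        "SAT/ACT scores": 0.50,
        "High school ranking": 0.40,
        "Acceptance rate": 0.10,
    },
    "Faculty resources": {
        "Faculty compensation": 0.35,
        "Class size, 1-19 students": 0.30,
        "% faculty with top terminal degrees": 0.15,
        "Class size, 50 or more": 0.10,
        "Percent full-time faculty": 0.05,
        "Student/faculty ratio": 0.05,
    },
    "Financial resources": {
        "Average educational expenditure per student": 1.00,
    },
}

for category, cat_weight in category_weights.items():
    # Categories with no listed sub-indicators count as a single indicator.
    for indicator, sub_weight in sub_weights.get(category, {category: 1.0}).items():
        print(f"{indicator}: {cat_weight * sub_weight:.1%} of the overall score")
```

For example, the average graduation rate alone contributes 20% × 80% = 16% of the overall score.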

  10. Possible unintended consequences of past and current measures…. Policy: No Child Left Behind (and Collegiate Learning Assessment—CLA) • Teaching to the test • Altering curriculum Public: US News & World Report • Students and parents use rank instead of fit to select which college to attend. Institutions: US News & World Report • Mission creep/Striving—pulling away from educational core—teaching and learning • Over-reliance on SAT (GWU, for example) • Numbers Manipulation: Cornell removed non-graduates from the alumni list

  11. Questions Absent in These Measures… • What is the level of academic rigor? • What is the quality of teaching? • What are the educational practices that an institution employs that affect student learning? [Maybe a few examples of rankings that use surveys to measure these items, for example, Princeton Review]

  12. Why are these questions absent? • These data are difficult to obtain!! • These data are expensive to obtain!! • Colleges and universities are protective of academic freedom and are insular with data about the educational core, largely because of the fear that the data will not adequately represent their institution and goals.

  13. Enter NSSE, CLA, VSA/College Portrait The National Survey of Student Engagement (NSSE): Surveys institutions about their effective educational practices. CLA: Measures students' critical thinking skills pre- and post-college via a standardized test. Voluntary System of Accountability (VSA)/College Portrait: Created to ward off externally imposed and mandated higher education testing. Aimed at the public transparency of higher education—compiles several data sources: NSSE, CLA, graduation/retention rates. PROBLEMS: • Concerns with validity • Relies on a single data collection method • Not primarily intended for the public • Assumes learning is due to the college environment • Missing data / limited data

  14. Purpose This research agenda aims to create alternative, innovative, and comprehensive measures of educational quality across institutions that could contribute to public understanding of college and university quality.

  15. What are the intended consequences of this new educational quality measure? • A stronger focus on the educational core of institutions: teaching, academic rigor, and educational experiences • Public access to comprehensive data about teaching, academic rigor, and educational experiences in higher education at the institutional level • Administrators having an in-depth understanding of how their institution compares to others in terms of teaching quality, academic rigor, and educational experiences

  16. Three Phases: • Dual-Institution Pilot (Spring 2013) • One large, public, research-extensive institution • One medium, private, research-extensive institution • Multi-Institution Peer Benchmarking Pilot (Fall 2014) • National Study with publicly posted data (Fall 2016)

  17. Defining our Focus • Educational Quality: Effective institutional and educational practices that influence a student's learning and development during college. [Also note what we are NOT measuring—e.g., cost, fit, campus services and activities] • Data collection rather than analysis techniques • Techniques that can be insightful at the institutional level—i.e., we are providing data for institutional benchmarking, not judging the effectiveness of higher education as a whole. We are also not evaluating individual faculty or classrooms.

  18. Defining our Focus • Why focus on the public and prospective students/families? • A transparent system that measures educational experiences would reward institutions that focus on the educational core of their institutions, rather than resources, incoming characteristics, or research productivity.

  19. Academic Rigor • Based on the cognitive complexity required of students in the coursework, as defined by the revised Bloom's Taxonomy (Anderson & Krathwohl, 2001)
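As a rough illustration of what coding coursework for cognitive complexity can look like, the sketch below tags learning-objective verbs with levels of the revised Bloom's taxonomy. The verb lists and the matching rule are assumptions made for illustration; they are not the study's actual observation or syllabus rubric.

```python
# Hypothetical sketch: tag the verbs in a learning objective with levels of the
# revised Bloom's taxonomy (Anderson & Krathwohl, 2001). The verb lists below
# are illustrative assumptions, not the project's coding protocol.

BLOOM_LEVELS = {
    "remember":   {"define", "list", "recall", "identify"},
    "understand": {"summarize", "explain", "classify", "describe"},
    "apply":      {"use", "solve", "demonstrate", "implement"},
    "analyze":    {"compare", "differentiate", "organize", "attribute"},
    "evaluate":   {"critique", "judge", "justify", "defend"},
    "create":     {"design", "construct", "compose", "formulate"},
}

def code_objective(objective: str) -> list[str]:
    """Return the Bloom's levels whose verbs appear in a learning objective."""
    words = {w.strip(".,;:").lower() for w in objective.split()}
    return [level for level, verbs in BLOOM_LEVELS.items() if verbs & words]

print(code_objective("Students will compare and critique two competing theories."))
# -> ['analyze', 'evaluate']
```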

  20. Teaching Quality • Based on Anna Neumann's claims about teaching and learning (2012). • According to this framework, quality teaching entails: (Part I) Orchestrating an encounter with subject matter ideas (Part II) Connecting students' learning to prior knowledge (Part III) Supporting students in working through the cognitive and emotional features of encounters between their own long-held understandings and new ones gained during the course.

  21. Essential Learning Outcomes • Based on the Association of American Colleges and Universities' (AAC&U) Essential Learning Outcomes (ELO). • This framework was developed by AAC&U through engagement with hundreds of institutions, accreditors, and higher education stakeholders (AAC&U, 2004). • Four Parts: • ELO Part I: Knowledge of Human Cultures and the Physical and Natural World • ELO Part II: Intellectual and Practical Skills • ELO Part III: Personal and Social Responsibility • ELO Part IV: Integrative and Applied Learning

  22. Dual-institution pilot • One large, public, highly ranked research institution in the East—to determine the ability to conduct this study in a large, multifaceted institution • One medium, private, urban, multi-campus religious institution—to determine the ability to conduct this study in unusual curricular and logistical settings • Investigated the use of: • Course observations • Syllabus analysis • Experience sampling • Analysis of student work • Course evaluations

  23. Was the pilot successful? • 2 institutions rather than 1—and even more volunteered • Stratified random sampling by class size, level of class (intro or advanced), and discipline (college) • 40% of faculty agreed to have their courses observed—much higher than anticipated • Completed a total of 152 class observations • Collected 154 syllabi • Inter-rater reliability moderate to high on class observations • Academic rigor and teaching quality constructs confirmed • Experience sampling and course evaluations—no go!
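For readers curious how the inter-rater reliability of the class observations might be quantified, here is a minimal sketch using Cohen's kappa, one common agreement statistic. The slide does not name the statistic actually used, and the ratings below are invented for illustration.

```python
# Minimal sketch: agreement between two observers rating the same class sessions.
# Cohen's kappa is one common choice; the ratings here are invented.
from sklearn.metrics import cohen_kappa_score

rater_a = [3, 4, 2, 5, 4, 3, 2, 4]  # e.g., rigor ratings on a 1-5 scale
rater_b = [3, 4, 3, 5, 4, 3, 2, 5]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")
```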

  24. Lessons Learned…
  Prior assumption: Institutions are insular and protective with data about the academic core
  Lesson learned: Institutions are hungry for these data, but the burden on institutions must be low (time and cost)
  Prior assumption: Faculty will be even more protective of educational data
  Lesson learned: Faculty see problems with current ranking systems, believe in their teaching practices, and are open to taking part in this study

  25. Lessons learned…
  Prior assumption: The main concern for faculty is curricular and based on being protective of in-class educational concerns
  Lesson learned: The main concerns for faculty are: 1) misuse of time (class time or faculty time); 2) anonymity; 3) concerns for students (confidentiality, consent)
  Prior assumption: Diverse data collection methods yield a different and more comprehensive vantage point for understanding educational quality
  Lesson learned: CONFIRMED. Witnessing teaching in action and seeing the educational purposes as intended by faculty is different from self-report surveys and current ranking mechanisms.

  26. Next Steps • Multi-institutional Peer Benchmarking Pilot (6-10 institutions) in fall of 2014 • Ideally, institutions would be peer institutions and interested in sharing metrics across the peers only (not available to public or outside audiences)

  27. Design of Multi-Institutional Peer Benchmarking Study The research team will measure academic rigor, teaching quality, and Essential Learning Outcomes using 4 different data collection methods: • Short student survey: Survey a representative sample of students across the university—5 minute survey • Syllabus analysis: Use data from a stratified random sample of syllabi across the university • Class observations: A team of Dr. Campbell and TC graduate students will observe a stratified random sample of undergraduate courses over a week-long site visit. • Samples of student work—this is an optional data collection method, based on institutional needs. This would use a pre-existing sample of student work previously collected by an institution (e.g. for accreditation purposes).

  28. Benefits for pilot site institutions • Institutions that participate in the pilot project will receive metrics to understand the pulse of academic rigor, teaching quality, and Essential Learning Outcomes at the institution. There will be NO cost to the institution for this service. This study will produce summative data, not intended for the evaluation of individual faculty, but we can provide data for academic units (college level, not department level). • If a consortium of peer institutions decides to participate in the multi-institutional pilot study and consents to share data, the metrics could be used to benchmark educational quality across consortium institutions. • When we launch the national study and other institutions are asked to pay to participate, institutions that took part in the pilot will be able to enroll in the first year without cost.

  29. Assurances for Pilot Sites • Confidentiality: any identifying data from the pilot site institution would never be released to any entity outside of Dr. Campbell and the research team • No explicit cost during the pilot or the first year of the national study • Minimal burden on institutions, students, and faculty • Access to data for sampling and recruitment • One administrator email of support • One point person for logistics of the site visit • One on-campus meeting room during the week-long site visit

  30. Data needs • Contact information for a stratified random sample of undergraduate students for the short student survey • A stratified random sample of undergraduate courses for classroom observation and syllabus analysis • Samples of student work, if desired by the site institution • Additional data elements to assist in stratified sampling and recruitment (e.g., college, class year, category of faculty, faculty email, time and day of course) Note: For ease of data collection, we can pull the samples ourselves if the institution would prefer to give us entire lists of undergraduates or courses (see the sketch below).
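To make the sampling note above concrete, here is a minimal sketch of drawing a stratified random sample of courses from a full course list with pandas. The file names, column names ("college", "level", "size_band"), and the 10% sampling fraction are assumptions for illustration, not the study's actual protocol.

```python
# Illustrative sketch: pull a stratified random sample of undergraduate courses
# from a full course list. Column names, file names, and the sampling fraction
# are hypothetical.
import pandas as pd

# Hypothetical registrar export with one row per course section
courses = pd.read_csv("undergraduate_courses.csv")

# Sample 10% of courses within each college x level x class-size stratum
sample = courses.groupby(["college", "level", "size_band"]).sample(
    frac=0.10, random_state=42
)

sample.to_csv("observation_sample.csv", index=False)
```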

  31. Timeline • Fall 2013: Recruit site institutions for multi-institutional pilot study • Spring 2014: Obtain IRB approval and coordinate with academic administrators at site institutions for multi-institutional pilot study • Fall 2014: Conduct multi-institutional pilot study • Spring 2015: Analyze metrics in terms of psychometric validity, practicality of data collection, and usefulness of results for the public • Summer 2015: Multi-institutional benchmarking pilot sites receive reports • Fall 2015: Recruit institutions for national study; apply for additional funding for national study • Spring 2016: Obtain IRB approval and coordinate with academic administrators at site institutions for national study • Fall 2016: Conduct national study

  32. Additional Resources Select publications and presentations in this research stream: • Campbell, C. M. (in progress). Assessing College Quality: Illuminating the Black Box and Contending with Data Gluttony in Higher Education. Manuscript commissioned for Volume 30 of Higher Education: Handbook of Theory and Research. • Campbell, C. M., Jimenez, M., Benmergui, D., & Walker, C. (2013, November). A Mosaic of Methods: Measuring College Educational Quality at Two Research Institutions. Paper to be presented at the annual meeting of the Association for the Study of Higher Education, Las Vegas, NV. • Campbell, C. M., & Cabrera, A. (2012, November). Making the Mark: Are Deep Learning and GPA Related? Paper presented at the annual meeting of the Association for the Study of Higher Education, Las Vegas, NV. • Campbell, C. M., & Cabrera, A. (2011). How sound is NSSE? Investigating the psychometric properties of NSSE at a public, research extensive institution. Review of Higher Education, 35, 77-103. • Campbell, C. M., & Mislevy, J. (2012-2013). Student perceptions matter: Early signs of undergraduate student retention/attrition. Journal of College Student Retention, 14(4), 467-493. • Simone, S., Campbell, C. M., & Newhart, D. (2012). Measuring opinion and behavior. In B. Knight, G. McLaughlin, & R. Howard (Eds.), Handbook on institutional research. San Francisco, CA: Jossey-Bass.

  33. Questions for discussion • Do the current ranking mechanisms adequately represent the educational value of your institutions? • Would your institution fare better using a metric that focuses on educational practices, good college teaching, and academic rigor, rather than resources, reputation, and incoming characteristics? • Would this kind of data be useful to your institutional planning, if you had data across peer institutions? What would make it useful? • What would make it more feasible to conduct this kind of study at your institution? • What would be your concerns about conducting this kind of study at your institution? • What are your thoughts as we consider moving to a national study that would have publicly accessible data? • Other questions/comments?

  34. Questions? Comments?
  Corbin M. Campbell, Assistant Professor, campbell2@tc.columbia.edu
  Theresa Cruz Paul, Research Assistant
  Higher and Postsecondary Education Program, Teachers College, Columbia University
