Assessment – Change & Improvement


Presentation Transcript


  1. University/Program Mission • Institutional/Program Goals • Institutional/Program Learning Outcomes • Assessment – Change & Improvement • Assessment – Best Practices: Methods & Measures • Assessment – Collect and Evaluate (Office of Institutional Assessment)

  2. Institutional/Program Goals • Institutional/Program Learning Outcomes • Assessment – Change & Improvement • Assessment – Best Practices: Methods & Measures • Assessment – Collect and Evaluate • University/Program Mission: Defining the Mission (Office of Institutional Assessment)

  3. University Mission Statement The core mission of the University of San Francisco is to promote learning in the Jesuit Catholic tradition. The University offers undergraduate, graduate and professional students the knowledge and skills needed to succeed as persons and professionals, and the values and sensitivity necessary to be men and women for others. The University will distinguish itself as a diverse, socially responsible learning community of high quality scholarship and academic rigor sustained by a faith that does justice. The University will draw from the cultural, intellectual and economic resources of the San Francisco Bay Area and its location on the Pacific Rim to enrich and strengthen its educational programs.

  4. Program Mission: Core Curriculum. The University's Core Curriculum embodies the Jesuit, Catholic tradition that views faith, reason, and service to others as complementary resources in the search for truth and full human development. The Core promotes these values through their integration across the curriculum. As it develops its course offerings, the University affirms its commitment to provide our students with learning opportunities that embrace the fullness of the Catholic intellectual tradition.

  5. University/Program Mission • Institutional/Program Learning Outcomes • Assessment – Change & Improvement • Assessment – Best Practices: Methods & Measures • Assessment – Collect and Evaluate • Institutional/Program Goals: Defining the Goals (Office of Institutional Assessment)

  6. University Mission-Driven Learning Goals

  7. Program Learning Goals: Core Curriculum

  8. University/Program Mission • Institutional/Program Goals • Assessment – Change & Improvement • Assessment – Best Practices: Methods & Measures • Assessment – Collect and Evaluate • Institutional/Program Learning Outcomes: Defining the Outcomes (Office of Institutional Assessment)

  9. University Mission-Driven Learning Outcomes (example)

  10. Course Learning Outcomes: Rhetoric and Composition

  11. Course-Goals Matrix: Core Curriculum. Key: I = Introduced/Discussed, R = Reinforced, A = Advanced.
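
One convenient way to work with such a matrix is as a lookup table keyed by course. A minimal sketch in Python, assuming hypothetical course numbers and goal labels (none of them taken from the actual Core matrix):

```python
# Course-goals matrix as a lookup table.
# I = Introduced/Discussed, R = Reinforced, A = Advanced.
# Course numbers and goal labels are hypothetical placeholders.
CURRICULUM_MAP = {
    "RHET 110": {"Written Communication": "I", "Critical Thinking": "I"},
    "PHIL 240": {"Critical Thinking": "R", "Ethical Reasoning": "I"},
    "THRS 300": {"Ethical Reasoning": "A", "Written Communication": "R"},
}

def courses_covering(goal, level=None):
    """Return courses that address a goal, optionally filtered by level."""
    return [
        course
        for course, coverage in CURRICULUM_MAP.items()
        if goal in coverage and (level is None or coverage[goal] == level)
    ]

print(courses_covering("Critical Thinking"))       # ['RHET 110', 'PHIL 240']
print(courses_covering("Critical Thinking", "R"))  # ['PHIL 240']
```

A structure like this makes gap analysis straightforward: a goal that is never advanced, or a course that covers no goals, shows up as an empty query result.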

  12. University/Program Mission • Institutional/Program Goals • Institutional/Program Learning Outcomes • Assessment – Change & Improvement • Assessment – Collect and Evaluate • Assessment – Best Practices: Methods & Measures (Office of Institutional Assessment)

  13. Program / Practice • Select and develop assessment methods that are appropriate to departmental goals and outcomes. The methods chosen should answer four questions: • Does the program meet or exceed certain standards? • How does the program compare to others? • Does the program do a good job at what it sets out to do? • How can the program experience be improved? • See also: 9 Principles of Good Assessment and Assessment Terminology (below). Effective methods of assessment provide both positive and negative feedback; finding out what is working well is only one goal of program assessment.

  14. Program / Practice: Assessment Methods • Direct assessment methods ask students to demonstrate their learning; they include objective tests, essays, presentations, and classroom assignments. • Indirect assessment methods ask students to reflect on their learning; they include surveys and interviews.

  15. University/Program Mission • Institutional/Program Goals • Institutional/Program Learning Outcomes • Assessment – Change & Improvement • Assessment – Best Practices: Methods & Measures • Assessment – Collect and Evaluate (Office of Institutional Assessment)

  16. Observe & Measure • Choosing the right evaluation instrument: • Does it link to the learning outcomes? • Is it appropriately comprehensive? • Is it a part of a triangulation strategy of evaluation? • Is it clear and are interpretations consistent? • Is it useful for informing improvement? • Do the results make sense? • Is it timely and practical? • Is evidence gathered across time and across situations? • Is the amount of evaluation appropriate so as to not be overwhelming? • Are there appropriate faculty and staff resources to support the assessment plan?

  17. Observe & Measure • Include qualitative as well as quantitative measures. Not all assessment measures have to involve quantitative measurement; a combination of qualitative and quantitative methods can offer the most effective way to assess goals and outcomes. Use an assessment method that matches your departmental culture: in a department where qualitative inquiry is particularly valued, for example, those types of methods should be incorporated into the plan. The data you collect must have meaning and value to those who will be asked to make changes based on the findings. • See also: Qualitative Assessment Measures • Quantitative Assessment Measures • Formative vs. Summative Assessment • Benchmarking • Tools and Techniques

  18. University/Program Mission • Institutional/Program Goals • Institutional/Program Learning Outcomes • Assessment – Best Practices: Methods & Measures • Assessment – Collect and Evaluate • Assessment – Change & Improvement (Office of Institutional Assessment)

  19. Record Improvement • The action plan: • Institutional commitment to action • Sharing of assessment results • Campus discussion • Shared decision making • Empowerment • Providing resources • Being flexible • Ensuring learning has occurred

  20. Defining the Mission • The mission is: • A lens that guides assessment, decision making, goal setting, and priority setting. • A holistic vision of the values and philosophy of the department. • A broad statement of what matters to faculty.

  21. Defining the Mission • The mission addresses: • What do you stand for? • How do you set yourself apart from your peers? • What do you want to be known for? • Where are you going?

  22. Defining the Mission • Program missions: do they align with the University mission?

  23. Defining the Learning Goals • Learning goals are: • Broad statements concerning the knowledge, skills, and attitudes (KSAs) of graduating students. • What you want students to learn. • What you want students “to be.” • Too general on their own to guide assessment and planning; that is the job of learning outcomes.

  24. Defining the Learning Outcomes • Learning outcomes describe: • The specific behaviors, skills, or abilities that inform you that a learning goal has been achieved. • The evidence that would convince a skeptic that your students are achieving a set of goals. • The evidence that students are “getting it.” • What you want students “to know.” • They transform learning goals into specific student performances and behaviors that demonstrate learning and skill development.

  25. Defining the Learning Outcomes • Learning outcomes: • Make the learning goals explicit. • Describe what program goals mean. • Focus on the learner and what they learn. • Explain student mastery of program goals. • Comprehensively define each goal. • Describe observable behavior. • Use active verbs that describe a specific outcome. • Identify the depth of processing that faculty expect. • Clarify faculty expectations for absolute or value-added attainment.

  26. Defining the Learning Outcomes • Types of outcomes: • Cognitive: What you want students to “know.” • Affective: What you want students to “think.” • Behavioral: What you want students to “be able to do.” • Outcomes reflect different levels of learning: mastery of a skill or the development of higher-order learning.

  27. Program / Practice 9 Principles of Good Assessment • The assessment of student learning begins with educational values. Assessment is not an end in itself but a vehicle for educational improvement. Its effective practice, then, begins with and enacts a vision of the kinds of learning we most value for students and strive to help them achieve. Educational values should drive not only what we choose to assess but also how we do so. Where questions about educational mission and values are skipped over, assessment threatens to be an exercise in measuring what's easy, rather than a process of improving what we really care about. • Assessment is most effective when it reflects an understanding of learning as multidimensional, integrated, and revealed in performance over time. Learning is a complex process. It entails not only what students know but what they can do with what they know; it involves not only knowledge and abilities but values, attitudes, and habits of mind that affect both academic success and performance beyond the classroom. Assessment should reflect these understandings by employing a diverse array of methods, including those that call for actual performance, using them over time so as to reveal change, growth, and increasing degrees of integration. Such an approach aims for a more complete and accurate picture of learning, and therefore firmer bases for improving our students' educational experience. • Assessment works best when the programs it seeks to improve have clear, explicitly stated purposes. Assessment is a goal-oriented process. It entails comparing educational performance with educational purposes and expectations -- those derived from the institution's mission, from faculty intentions in program and course design, and from knowledge of students' own goals. Where program purposes lack specificity or agreement, assessment as a process pushes a campus toward clarity about where to aim and what standards to apply; assessment also prompts attention to where and how program goals will be taught and learned. Clear, shared, implementable goals are the cornerstone for assessment that is focused and useful. • Assessment requires attention to outcomes but also and equally to the experiences that lead to those outcomes. Information about outcomes is of high importance; where students "end up" matters greatly. But to improve outcomes, we need to know about student experience along the way -- about the curricula, teaching, and kind of student effort that lead to particular outcomes. Assessment can help us understand which students learn best under what conditions; with such knowledge comes the capacity to improve the whole of their learning.

  28. Program / Practice 9 Principles of Good Assessment • Assessment works best when it is ongoing, not episodic. Assessment is a process whose power is cumulative. Though isolated, "one-shot" assessment can be better than none, improvement is best fostered when assessment entails a linked series of activities undertaken over time. This may mean tracking the progress of individual students, or of cohorts of students; it may mean collecting the same examples of student performance or using the same instrument semester after semester. The point is to monitor progress toward intended goals in a spirit of continuous improvement. Along the way, the assessment process itself should be evaluated and refined in light of emerging insights. • Assessment fosters wider improvement when representatives from across the educational community are involved. Student learning is a campus-wide responsibility, and assessment is a way of enacting that responsibility. Thus, while assessment efforts may start small, the aim over time is to involve people from across the educational community. Faculty play an especially important role, but assessment's questions can't be fully addressed without participation by student-affairs educators, librarians, administrators, and students. Assessment may also involve individuals from beyond the campus (alumni/ae, trustees, employers) whose experience can enrich the sense of appropriate aims and standards for learning. Thus understood, assessment is not a task for small groups of experts but a collaborative activity; its aim is wider, better-informed attention to student learning by all parties with a stake in its improvement. • Assessment makes a difference when it begins with issues of use and illuminates questions that people really care about. Assessment recognizes the value of information in the process of improvement. But to be useful, information must be connected to issues or questions that people really care about. This implies assessment approaches that produce evidence that relevant parties will find credible, suggestive, and applicable to decisions that need to be made. It means thinking in advance about how the information will be used, and by whom. The point of assessment is not to gather data and return "results"; it is a process that starts with the questions of decision-makers, that involves them in the gathering and interpreting of data, and that informs and helps guide continuous improvement.

  29. Program / Practice 9 Principles of Good Assessment • Assessment is most likely to lead to improvement when it is part of a larger set of conditions that promote change. Assessment alone changes little. Its greatest contribution comes on campuses where the quality of teaching and learning is visibly valued and worked at. On such campuses, the push to improve educational performance is a visible and primary goal of leadership; improving the quality of undergraduate education is central to the institution's planning, budgeting, and personnel decisions. On such campuses, information about learning outcomes is seen as an integral part of decision making, and avidly sought. • Through assessment, educators meet responsibilities to students and to the public. There is a compelling public stake in education. As educators, we have a responsibility to the publics that support or depend on us to provide information about the ways in which our students meet goals and expectations. But that responsibility goes beyond the reporting of such information; our deeper obligation -- to ourselves, our students, and society -- is to improve. Those to whom educators are accountable have a corresponding obligation to support such attempts at improvement. • Authors: Alexander W. Astin; Trudy W. Banta; K. Patricia Cross; Elaine El-Khawas; Peter T. Ewell; Pat Hutchings; Theodore J. Marchese; Kay M. McClenney; Marcia Mentkowski; Margaret A. Miller; E. Thomas Moran; Barbara D. Wright. This document was developed under the auspices of the AAHE Assessment Forum (Barbara Cambridge is Director) with support from the Fund for the Improvement of Post-Secondary Education, with additional support for publication and dissemination from the Exxon Education Foundation. Copies may be made without restriction.

  30. Program / Practice Assessment Terminology • Assessment for accountability: assessment of some unit (could be a department, program or entire institution) to satisfy stakeholders external to the unit itself. Results are often compared across units. Always summative. Example: to retain state approval, the achievement of a 90 percent pass rate or better on teacher certification tests by graduates of a school of education. • Assessment for improvement: assessment that feeds directly, and often immediately, back into revising the course, program or institution to improve student learning results. Can be formative or summative (see "formative assessment" for an example). • Assessment of individuals: uses the individual student, and his/her learning, as the level of analysis. Can be quantitative or qualitative, formative or summative, standards-based or value added, and used for improvement. Would need to be aggregated if used for accountability purposes. Examples: improvement in student knowledge of a subject during a single course; improved ability of a student to build cogent arguments over the course of an undergraduate career. • Assessment of institutions: uses the institution as the level of analysis. Can be quantitative or qualitative, formative or summative, standards-based or value added, and used for improvement or for accountability. Ideally institution-wide goals and outcomes would serve as a basis for the assessment. Example: how well students across the institution can work in multi-cultural teams as sophomores and seniors. • Assessment of programs: uses the department or program as the level of analysis. Can be quantitative or qualitative, formative or summative, standards-based or value added, and used for improvement or for accountability. Ideally program goals and outcomes would serve as a basis for the assessment. Example: how sophisticated a close reading of texts senior English majors can accomplish (if used to determine value added, would be compared to the ability of newly declared majors). • Direct assessment of learning: gathers evidence, based on student performance, which demonstrates the learning itself. Can be value added, related to standards, qualitative or quantitative, embedded or not, using local or external criteria. Examples: most classroom testing for grades is direct assessment (in this instance within the confines of a course), as is the evaluation of a research paper in terms of the discriminating use of sources. The latter example could assess learning accomplished within a single course or, if part of a senior requirement, could also assess cumulative learning. • External assessment: use of criteria (rubric) or an instrument developed by an individual or organization external to the one being assessed. Usually summative, quantitative, and often high-stakes (see below). Example: GRE exams.

  31. Program / Practice Assessment Terminology • Embedded assessment: a means of gathering information about student learning that is built into, and a natural part of, the teaching-learning process. Often uses, for assessment purposes, classroom assignments that are also evaluated to assign students a grade. Can assess individual student performance or aggregate the information to provide information about the course or program; can be formative or summative, quantitative or qualitative. Example: as part of a course, expecting each senior to complete a research paper that is graded for content and style, but is also assessed for advanced ability to locate and evaluate Web-based information (as part of a college-wide outcome to demonstrate information literacy). • Formative assessment: the gathering of information about student learning, during the progression of a course or program and usually repeatedly, to improve the learning of those students. Example: reading the first lab reports of a class to assess whether some or all students in the group need a lesson on how to make them succinct and informative. • "High stakes" use of assessment: the decision to use the results of assessment to set a hurdle that needs to be cleared for completing a program of study, receiving certification, or moving to the next level. Most often the assessment so used is externally developed, based on set standards, carried out in a secure testing situation, and administered at a single point in time. Examples: at the secondary school level, statewide exams required for graduation; in postgraduate education, the bar exam. • Indirect assessment of learning: gathers reflections about the learning or secondary evidence of its existence. Example: a student survey about whether a course or program helped develop a greater sensitivity to issues of diversity. • Local assessment: means and methods that are developed by an institution's faculty based on their teaching approaches, students, and learning goals. Can fall into any of the definitions here except "external assessment," for which it is an antonym. Example: one college's use of nursing students' writing about the "universal precautions" at multiple points in their undergraduate program as an assessment of the development of writing competence. • Learning goals: the general aims or purposes of a program and its curriculum. Effective goals are broadly stated, meaningful, achievable, and assessable. Goals provide a framework for determining the more specific educational outcomes of a program, and should be consistent with program and institutional mission.

  32. Program / Practice Assessment Terminology • Learning outcomes: operational statements describing specific student behaviors that evidence the acquisition of desired knowledge, skills, abilities, capacities, attitudes, or dispositions. Learning outcomes can be usefully thought of as behavioral criteria for determining whether students are achieving the educational outcomes of a program, and, ultimately, whether overall program goals are being successfully met. • Qualitative assessment: collects data that does not lend itself to quantitative methods but rather to interpretive criteria (see the first example under "standards"). • Quantitative assessment: collects data that can be analyzed using quantitative methods (see "assessment for accountability" for an example). • Standards: sets a level of accomplishment all students are expected to meet or exceed. Standards do not necessarily imply high-quality learning; sometimes the level is a lowest common denominator. Nor do they imply complete standardization in a program; a common minimum level could be achieved by multiple pathways and demonstrated in various ways. Examples: carrying on a conversation about daily activities in a foreign language using correct grammar and comprehensible pronunciation; achieving a certain score on a standardized test. • Summative assessment: the gathering of information at the conclusion of a course, program, or undergraduate career to improve learning or to meet accountability demands. When used for improvement, it impacts the next cohort of students taking the course or program. Examples: examining student final exams in a course to see if certain specific areas of the curriculum were understood less well than others; analyzing senior projects for the ability to integrate across disciplines. • Triangulation: involves the collection of data via multiple methods in order to determine if the results show a consistent outcome. • Value added: the increase in learning that occurs during a course, program, or undergraduate education. Can either focus on the individual student (how much better a student can write, for example, at the end than at the beginning) or on a cohort of students (whether senior papers demonstrate more sophisticated writing skills, in the aggregate, than freshman papers). Requires a baseline measurement for comparison.
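
Since value added requires a baseline, the computation itself is just the difference between an end-of-program measurement and that baseline, per student or in the aggregate. A minimal sketch, assuming hypothetical rubric scores on a shared 1-6 scale (student IDs and numbers are invented):

```python
# Hypothetical writing-rubric scores (1-6 scale) for the same cohort,
# measured at program entry (baseline) and again at graduation.
baseline = {"s01": 2.5, "s02": 3.0, "s03": 3.5}
exit_scores = {"s01": 4.0, "s02": 4.5, "s03": 4.0}

# Per-student value added: exit score minus baseline score.
per_student = {s: exit_scores[s] - baseline[s] for s in baseline}

# Cohort value added: difference between the aggregate means.
cohort_gain = (sum(exit_scores.values()) / len(exit_scores)
               - sum(baseline.values()) / len(baseline))

print(per_student)            # {'s01': 1.5, 's02': 1.5, 's03': 0.5}
print(round(cohort_gain, 2))  # 1.17
```

The same subtraction works for cohort comparisons (e.g., senior papers vs. freshman papers), provided both groups are scored on the same instrument.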

  33. Program / Practice Direct Assessment Measures • Faculty-designed comprehensive examinations and assignments • Professionally judged performances or demonstrations of abilities in context • Portfolios of student work compiled over time • Samples of representative student work generated in response to typical course assignments • Scores and pass rates on appropriate licensure/certification exams (e.g., Praxis, NLN) or other published tests (e.g., Major Field Tests, CLA, MAPP) • Summaries/analyses of electronic discussion threads • Ratings of student skills by field-experience supervisors • Tests and exams (final, qualifying, and comprehensive) • Essays • Presentations • Dissertations • Exhibitions • Classroom assignments • “Capstone” experiences (research projects, theses, oral defenses, or performances scored using a rubric)

  34. Program / Practice Indirect Assessment Measures • Student satisfaction surveys • Focus groups or interviews • Graduation rates • Self-reported gains • Quality/reputation of graduate and four-year programs into which alumni are accepted • Placement rates of graduates into appropriate career positions, and starting salaries • Alumni perceptions of their career responsibilities and satisfaction • Off-the-shelf surveys (e.g., NSSE, BCSSE, LSSE, SSI) • Student ratings of, and reflections on, their own knowledge and skills • Grades

  35. Program / Practice Qualitative Assessment Measures • Qualitative measures “rely on descriptions rather than numbers” (Palomba and Banta 1999). • Ethnographic studies • Exit interviews • Formal recitals • Participant observations • Writing samples • Open-ended questions on surveys • Interviews

  36. Program / Practice Quantitative Assessment Measures • Quantitative measures assess teaching and learning by collecting and analyzing numeric data using statistical techniques. • GPA • Grades • Primary trait analysis scores • Exam scores • Demographics • Forced-choice surveys • Standardized teaching evaluations

  37. Observe & Measure Formative vs. Summative Evaluation • Formative evaluation: • Ongoing assessment intended to improve the student’s learning at the course, program or institutional level. • Summative evaluation: • Occurs at the end of a unit, course, or program to determine the achievement of overall goals. • Benchmarking: • Systematic comparison of outcomes against peer institutions, programs, or nationally normed data.
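
Benchmarking in this sense reduces to comparing local aggregate results against a peer or national norm. A minimal sketch, assuming hypothetical local exam scores and an illustrative published norm (all numbers invented):

```python
import statistics

# Hypothetical local senior-exam scores and a published national norm.
local_scores = [78, 85, 91, 69, 88, 74, 82]
national_mean, national_sd = 80.0, 10.0

local_mean = statistics.mean(local_scores)
# Express the gap in standard-deviation units for an easy comparison.
gap_in_sd = (local_mean - national_mean) / national_sd

print(f"local mean {local_mean:.1f}, "
      f"{gap_in_sd:+.2f} SD relative to the national norm")
# -> local mean 81.0, +0.10 SD relative to the national norm
```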

  38. Observe & Measure Tools & Techniques • Rating scales • Rubrics • Simple checklists • Self-reflection • Holistic scores • Prompts • Multiple-choice tests • Interpretive exercises • Comprehensive exams • Portfolios • Pre-graduation surveys • Placement rates • Retention rates • Graduation rates
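
Several of these tools combine naturally: an analytic rubric yields per-criterion ratings that can be collapsed into a single holistic score. A minimal sketch, assuming a hypothetical three-criterion weighted rubric (criteria and weights are illustrative):

```python
# Hypothetical analytic rubric: criterion -> weight (weights sum to 1.0).
RUBRIC = {"thesis": 0.4, "evidence": 0.4, "mechanics": 0.2}

def holistic_score(ratings):
    """Collapse per-criterion ratings (e.g., 1-4) into one weighted score."""
    assert set(ratings) == set(RUBRIC), "rate every criterion"
    return sum(RUBRIC[c] * ratings[c] for c in RUBRIC)

# Example: one student essay rated on a 1-4 scale per criterion.
score = holistic_score({"thesis": 3, "evidence": 4, "mechanics": 2})
print(round(score, 2))  # 3.2
```

Keeping the per-criterion ratings alongside the holistic score preserves the diagnostic detail (which criterion is weakest) that a single number would hide.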
