
S519: Evaluation of Information Systems


Presentation Transcript


  1. S519: Evaluation of Information Systems
  Analyzing data: Rank (Ch. 9, p. 171)

  2. Synthesizing for "ranking"
  • What are "ranking" evaluations?
  • Examples?
  • How do they differ from "grading" evaluations?

  3. Qualitative and quantitative
  • Qualitative: Qualitative Weight and Sum (QWS)
  • Quantitative: Numerical Weight and Sum (NWS)

  4. Numerical Weight and Sum (NWS)
  • A quantitative synthesis method for summing an evaluand's performance across multiple criteria
  • Procedure (a sketch follows this slide):
    • Assign a numerical importance weight and a numerical performance score to each criterion (dimension)
    • Multiply each weight by its performance score
    • Sum the products
    • The resulting sum represents the overall merit of the evaluand
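A minimal sketch of the NWS calculation in Python, assuming made-up criterion names, weights, and scores purely to illustrate the multiply-and-sum step (these are not the values in the textbook tables):

```python
# Numerical Weight and Sum: overall merit = sum of (importance weight * performance score).
# The criteria, weights, and scores below are hypothetical illustrations.

def nws_score(weights, scores):
    """Multiply each criterion's weight by its score and sum the products."""
    return sum(weights[criterion] * scores[criterion] for criterion in weights)

weights = {"process": 3, "outcomes": 5, "cost": 4}   # importance weights (1 = low, 5 = high)
scores  = {"process": 4, "outcomes": 3, "cost": 2}   # performance scores on a 1-5 scale

print(nws_score(weights, scores))  # 3*4 + 5*3 + 4*2 = 35
```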

  5. Numerical Weight and Sum (NWS)
  • It is appropriate when:
    • There are only a small number of criteria
    • There is some other mechanism for taking bars into account (why?)
    • There is a defensible needs-based strategy for ascribing weights

  6. Training program evaluation
  • A comparative evaluation of three different interventions for training managers:
    • A mountain retreat featuring interactive sessions with multiple world-class management gurus
    • An in-house training and mentoring program run by human resources
    • A set of videos and the latest book on management from management guru Peter Drucker

  7. Training program evaluation
  • Needs assessment for this evaluation
    • Bear in mind that this is a comparative evaluation
    • How do you want to compare these programs? What are their key features?
  • Identify the dimensions of merit (Process, Outcomes, and Cost)
  • Decide the importance of each dimension (assign weights to the dimensions, based on needs)
  • See Table 9.8

  8. Training program evaluation
  • Next steps:
    • Data collection (what were your experiences collecting data for your project?)
    • Data analysis:
      • Rate each program's performance against predefined ratings: excellent, very good, good, fair, or poor (see Table 9.9 for this example)
      • Convert weights into numbers (see Table 9.10)
      • Convert ratings into numbers (see Table 9.10)
      • Synthesis step (how? see Table 9.11; a sketch follows this slide)
      • How do you interpret Table 9.11?
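A hedged sketch of the conversion and synthesis steps for the three programs, assuming an invented rating-to-number scale, invented weights, and invented ratings; these stand in for Tables 9.9 through 9.11, which are not reproduced here:

```python
# Convert qualitative ratings to numbers, weight them, and sum per program.
# All numbers and ratings below are illustrative placeholders.

rating_to_number = {"excellent": 5, "very good": 4, "good": 3, "fair": 2, "poor": 1}

weights = {"process": 2, "outcomes": 5, "cost": 3}   # hypothetical importance weights

ratings = {                                          # hypothetical ratings per program
    "mountain retreat": {"process": "excellent", "outcomes": "good", "cost": "poor"},
    "in-house program": {"process": "good", "outcomes": "very good", "cost": "good"},
    "videos and book":  {"process": "fair", "outcomes": "fair", "cost": "excellent"},
}

# Synthesis: weighted sum for each program, highest total ranks first.
totals = {
    program: sum(weights[d] * rating_to_number[r] for d, r in dims.items())
    for program, dims in ratings.items()
}
for program, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(program, total)
```

With these invented numbers the in-house program happens to come out on top; the point is the mechanics of the synthesis step, not the result.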

  9. Exercise
  • Do it by hand:
    • Convert Table 9.9 into Table 9.10 (define your own numeric values for the importance and rating scales) and work out which program is best.
    • If the cost criterion suddenly becomes extremely important, does this change the final result? (a sensitivity sketch follows this slide)
  • Work on your own first
  • Then form pairs and discuss
  • What are the pros and cons of NWS?
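For the second question, one way to test sensitivity is to rerun the same synthesis with a much larger cost weight. This sketch reuses the invented numbers from the previous sketch, so it does not answer the textbook exercise itself:

```python
# Sensitivity check: does the ranking change when cost becomes extremely important?
# Uses the same hypothetical ratings as the earlier sketch.

rating_to_number = {"excellent": 5, "very good": 4, "good": 3, "fair": 2, "poor": 1}
ratings = {
    "mountain retreat": {"process": "excellent", "outcomes": "good", "cost": "poor"},
    "in-house program": {"process": "good", "outcomes": "very good", "cost": "good"},
    "videos and book":  {"process": "fair", "outcomes": "fair", "cost": "excellent"},
}

def rank(weights):
    totals = {
        program: sum(weights[d] * rating_to_number[r] for d, r in dims.items())
        for program, dims in ratings.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(rank({"process": 2, "outcomes": 5, "cost": 3}))   # original hypothetical weights
print(rank({"process": 2, "outcomes": 5, "cost": 10}))  # cost now dominates
```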

  10. Qualitative Weight and Sum (QWS)
  • A non-numerical synthesis methodology for summing the performances of an evaluand on multiple criteria to determine overall merit
  • A ranking method for determining the relative merit of two or more evaluands
  • Not suitable for grading
  • It is appropriate for:
    • Personnel selection; product, service, or proposal selection

  11. QWS
  • Step 1: Determine importance in terms of maximum possible value
    • How? (see Chapter 7, six strategies)
    • Table 9.12 (compare with Table 9.8)
  • Step 2: Set bars (a sketch follows this slide)
    • A bar is the cut point between acceptable and unacceptable performance on a criterion, for example:
      • Too expensive to afford
      • Too much time away from work
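Because a bar is a hard cut point rather than a weighted criterion, it can be modelled as a simple pass/fail check. The thresholds and field names below are invented for illustration:

```python
# Step 2 sketch: bars as pass/fail predicates over an evaluand's raw data.
# Thresholds and field names are hypothetical.

bars = {
    "affordable":    lambda p: p["cost_per_trainee"] <= 10_000,  # too expensive to afford
    "time off site": lambda p: p["days_off_site"] <= 5,          # too long away from work
}

def passes_all_bars(program):
    """A program that fails any bar is unacceptable regardless of its other merits."""
    return all(check(program) for check in bars.values())

retreat  = {"cost_per_trainee": 12_000, "days_off_site": 7}
in_house = {"cost_per_trainee": 4_000,  "days_off_site": 0}

print(passes_all_bars(retreat))   # False
print(passes_all_bars(in_house))  # True
```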

  12. QWS
  • Step 3: Create value determination rubrics (a sketch follows this slide)
    • Rubrics are level-based (see Chapter 8)
    • Describe each level; how is the bar handled?
    • Levels: unacceptable → no noticeable value → marginally valuable → valuable → extremely valuable
    • Describe what performance would look like at each level
    • Each dimension can have its own rubric, or each group of dimensions can share one
    • Each group of questions can have its own rubric
    • The synthesis step can have its own rubric
    • Example: rubric for rating the financial cost of training (see Table 9.14)
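A level-based rubric for one dimension can be expressed as an ordered scale plus a mapping from observed performance to a level. The dollar thresholds below are invented and only gesture at the kind of rubric Table 9.14 contains:

```python
# Step 3 sketch: a value determination rubric for the financial-cost dimension.
# The thresholds are hypothetical; a real rubric comes from the needs assessment.

QWS_LEVELS = [
    "unacceptable",         # below the bar: disqualifies the program
    "no noticeable value",
    "marginally valuable",
    "valuable",
    "extremely valuable",
]

def cost_level(cost_per_trainee):
    """Map a (hypothetical) cost per trainee to a QWS value level."""
    if cost_per_trainee > 10_000:
        return "unacceptable"        # bar: too expensive to afford
    if cost_per_trainee > 7_500:
        return "no noticeable value"
    if cost_per_trainee > 5_000:
        return "marginally valuable"
    if cost_per_trainee > 2_500:
        return "valuable"
    return "extremely valuable"

print(cost_level(8_000))   # no noticeable value
print(cost_level(2_000))   # extremely valuable
```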

  13. QWS
  • Step 4: Check the equivalence of value levels across dimensions
    • The validity of the QWS method depends heavily on ensuring rough equivalence of the value levels defined for each dimension
    • For example, do Table 9.14 and Table 9.15 define roughly equivalent value levels?
    • How? Put them into a matrix
    • See Table 9.16

  14. QWS
  • Step 5: Rate the value of actual performance on each dimension
    • Rate Table 9.9 according to the rubric (Table 9.16)
    • See Table 9.17
  • Step 6: Tally the number of ratings at each level and look for a clear winner (a sketch follows this slide)
    • For each program, how many symbols did it receive at each level?
    • Throw out programs with any unacceptable rating; is there a clear winner among the rest?
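A hedged sketch of steps 5 and 6, assuming invented per-dimension value levels in place of Table 9.17: eliminate any program with an unacceptable rating, then tally the remaining programs' ratings from the highest level down:

```python
# Steps 5-6 sketch: tally ratings per level and drop programs that fail a bar.
# The ratings below are illustrative placeholders.

from collections import Counter

LEVELS = ["unacceptable", "no noticeable value", "marginally valuable",
          "valuable", "extremely valuable"]

ratings = {   # value level earned on each dimension (hypothetical)
    "mountain retreat": ["extremely valuable", "valuable", "unacceptable"],
    "in-house program": ["valuable", "valuable", "marginally valuable"],
    "videos and book":  ["marginally valuable", "no noticeable value", "valuable"],
}

# Throw out any program with an unacceptable rating (it failed a bar).
survivors = {p: r for p, r in ratings.items() if "unacceptable" not in r}

# Tally each surviving program's ratings, reporting the highest levels first.
for program, r in survivors.items():
    tally = Counter(r)
    print(program, [(level, tally[level]) for level in reversed(LEVELS)])
```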

  15. QWS
  • Step 7: Refocus (a sketch follows this slide)
    • Delete the rows on which the programs score similarly (see Table 9.18)
    • Count how many symbols each remaining program received
    • Can we find a clear winner now?
    • Yes or no? Why?
    • How should we go further?
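Refocusing can be read as dropping the dimensions that no longer discriminate between the surviving programs and re-tallying the rest. The ratings here are again invented placeholders rather than Table 9.18:

```python
# Step 7 sketch: keep only the dimensions on which the surviving programs differ,
# then count ratings again. Ratings are hypothetical.

from collections import Counter

ratings = {
    "in-house program": {"process": "valuable", "outcomes": "valuable",
                         "cost": "marginally valuable"},
    "videos and book":  {"process": "marginally valuable", "outcomes": "valuable",
                         "cost": "extremely valuable"},
}

dimensions = next(iter(ratings.values())).keys()

# A dimension on which every program earned the same level cannot separate them.
discriminating = [d for d in dimensions
                  if len({dims[d] for dims in ratings.values()}) > 1]

for program, dims in ratings.items():
    tally = Counter(dims[d] for d in discriminating)
    print(program, dict(tally))
```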
