Evaluation of Research and Innovation of University Academics Using Soft-Computing Methodology


  1. Evaluation of Research and Innovation of University Academics Using Soft-Computing Methodology Faith-Michael Uzoka

  2. Outline • Introduction • Objectives/Significance • Literature • Methodology • Results and Discussion • Conclusion

  3. Introduction • There is a strong relationship between academic research output and the overall development of a nation (Lootsma and Bots 1999) • A number of stakeholders have invested huge sums of money in the development of research in universities • Universities place a very high premium on research compared to teaching and service (Uzoka 2003)

  4. Introduction • Research measurement is a complex, multi-criteria, multi-dimensional, and context-specific activity (Murphy 1994) • Universities differ from other corporate environments, so conventional evaluation models do not fit • There is a need to develop a model that captures research variables from a global perspective, taking cognisance of the fuzzy nature of parameter assessment

  5. Objective • Develop a fuzzy-enhanced analytic hierarchy process (FEAHP) model for the evaluation of university academic staff research productivity Significance • Takes cognizance of the multi-criteria, multi-situation, and multi-dimensional nature of research output evaluation • Utilizes confidence measures in obtaining the ratings of decision experts on various research evaluation criteria

  6. Literature • Universities adopt corporate evaluation models [e.g. research output patents, performance-based funding, benchmarking, best practices, and total quality management] (Shafer and Coate 1992) • Benchmarking seems to be the most prominent (Alt 2002); it is relevant in inter-institutional comparisons • Baughman and Goldman (1999) found a strong positive relationship between scholarship and institutional ranking

  7. Literature • Performance indicators that fail to take account of input differences across institutions are inappropriate (Johnes 1992, Neely et al. 2005) • Various indices have been proposed in the past: • Number of publications and presentations (E.g. Lopes and Lanzer 2002) • Publications including guidance and supervision (E.g. Uzoka and Akinyokun 2005) • Patents and spin-off companies (E.g. Wallmark et al. 1998) • Citation index (E.g. Hirsch 2005)

  8. Literature • Popular tools include: • Multi-criteria analysis (E.g. Islam and Rasad 2005) • Fuzzy technology (E.g. Akinyokun 2002, Royes et al. 2003) • Knowledge-based methodologies (E.g. Vinsonhaler et al. 1996, Uzoka and Akinyokun 2005) • There is a dearth of research on useful assessment criteria and models for evaluating the research of university academic staff, especially in the twenty-first century

  9. Methodology • Figure 1: FAHP Conceptual Framework. The framework flows from derivation of variables (domain experts) through linguistic PWC of variables, decision makers' rating of research (decision experts), aggregation of weights, and aggregation of ratings using FEAHP, to obtaining of the research evaluation model and determination of research productivity

  10. Figure 2: Hierarchy of Criteria

  11. Table 1: Distribution of Experts. Figure 3: Experts' Years of Experience, Average Confidence and Overall Confidence. Correlation between years of experience and overall confidence = 0.764

  12. Aggregation of Weights • Step 1: Standardization of Experts' Rating Confidence • Step 2: Adjustment of Fuzzy Values by Standardized Rating Confidence • Step 3: Deriving Aggregate Fuzzy Weights • Step 4: Defuzzification
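The transcript lists the four aggregation steps without their formulas. The sketch below is one illustrative reading of them, assuming the experts' ratings are triangular fuzzy numbers (TFNs), that Step 2 scales each TFN by the expert's standardized confidence, and that defuzzification uses the centroid; the function name and the numbers are hypothetical, not taken from the paper.

```python
# Illustrative sketch of the four aggregation steps, assuming triangular
# fuzzy numbers (TFNs) and confidence-weighted averaging; the paper's exact
# formulas are not reproduced in this transcript.

def aggregate_weights(tfns, confidences):
    """tfns: list of (l, m, u) ratings, one per expert.
    confidences: list of raw rating-confidence scores, one per expert."""
    # Step 1: standardize the experts' rating confidences so they sum to 1
    total_conf = sum(confidences)
    std_conf = [c / total_conf for c in confidences]

    # Step 2: adjust each expert's fuzzy value by the standardized confidence
    adjusted = [(l * w, m * w, u * w) for (l, m, u), w in zip(tfns, std_conf)]

    # Step 3: derive the aggregate fuzzy weight (sum of confidence-weighted TFNs)
    agg_l = sum(l for l, _, _ in adjusted)
    agg_m = sum(m for _, m, _ in adjusted)
    agg_u = sum(u for _, _, u in adjusted)

    # Step 4: defuzzify with the centroid of the aggregate TFN
    return (agg_l + agg_m + agg_u) / 3.0

# Example: three experts rate one criterion with different confidence levels
print(aggregate_weights([(2, 3, 4), (3, 4, 5), (1, 2, 3)], [0.8, 0.9, 0.6]))
```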

  13. Measure of Group Consensus • A high degree of group consensus increases the validity of the evaluation (Chen and Liu 2005) • The group consensus measure presented in Shirland et al. (2003) was adopted, but modified to eliminate the bias that results from the use of sample, rather than population, parameters • For any expert i to be included in the analysis, the expert's overall PWC deviation δi from the group consensus δ must satisfy δi ≤ δimax, for i = 1, 2, …, n, where p is the number of pairwise comparisons and n is the number of DEs
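The transcript omits the expression that the threshold "evaluates to", so only the screening idea is sketched below, assuming δi is the population (not sample) mean absolute deviation of an expert's p pairwise-comparison values from the group-mean values; the actual modified Shirland et al. (2003) formula may differ.

```python
# Hypothetical sketch of the expert-screening step only; the exact
# (modified) Shirland et al. (2003) formulas are not shown in the transcript.

def consensus_filter(pwc_by_expert, max_deviation):
    """pwc_by_expert: list of length-p lists, one per expert.
    Returns the indices of experts whose deviation from the group
    consensus does not exceed max_deviation."""
    n = len(pwc_by_expert)
    p = len(pwc_by_expert[0])
    # group consensus value for each of the p comparisons (simple mean here)
    consensus = [sum(expert[k] for expert in pwc_by_expert) / n for k in range(p)]
    retained = []
    for i, expert in enumerate(pwc_by_expert):
        # population (divide by p, not p - 1) mean absolute deviation
        deviation = sum(abs(expert[k] - consensus[k]) for k in range(p)) / p
        if deviation <= max_deviation:
            retained.append(i)
    return retained
```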

  14. Consensus Results • δ = 6.575 • δimax = 71.302 • Percentage consensus = 81.63% • Table 3: Consensus Values

  15. Obtaining the Final Evaluation Model • After defuzzification, the principles of AHP were applied in deriving the final evaluation model • Let P(i,j) be a pairwise comparison of two elements i and j, where {i,j} ∈ nk (nk = node k of the FEAHP tree) • The larger the value of P(i,j), the more i is preferred to j in the priority rating. The following rules govern the entries in the PWC matrix: • Rule 1: P(j,i) = 1/P(i,j) • Rule 2: If element i is judged to be of equal importance with element j, then P(i,j) = P(j,i) = 1; in particular, P(i,i) = 1 for all i • The pairwise comparison (PWC) matrices were obtained by entering the individual ratings of the experts into Expert Choice 11
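A minimal sketch of Rules 1 and 2 and of a priority derivation follows, using the common geometric-mean approximation; Expert Choice 11 itself computes priorities with the principal-eigenvector method, so the helper functions and the example judgments here are illustrative only.

```python
import math

# Build a reciprocal PWC matrix from upper-triangular judgments
# (Rule 1: P(j,i) = 1/P(i,j); Rule 2: P(i,i) = 1) and derive priorities
# with the row geometric-mean approximation.

def pwc_matrix(upper):
    """upper[(i, j)] = P(i, j) for i < j."""
    n = max(j for _, j in upper) + 1
    P = [[1.0] * n for _ in range(n)]          # Rule 2: unit diagonal
    for (i, j), v in upper.items():
        P[i][j] = v
        P[j][i] = 1.0 / v                      # Rule 1: reciprocal entry
    return P

def priorities(P):
    n = len(P)
    gm = [math.prod(row) ** (1.0 / n) for row in P]   # row geometric means
    total = sum(gm)
    return [g / total for g in gm]                    # normalized priorities

# Example with three elements: 0 vs 1 = 3, 0 vs 2 = 5, 1 vs 2 = 2
P = pwc_matrix({(0, 1): 3.0, (0, 2): 5.0, (1, 2): 2.0})
print(priorities(P))
```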

  16. Table 4: Research Output Evaluation PWC Note: For shaded values, the variable in the top row is more important than the variable in the left-hand column

  17. Synthesis - Goal (Level 1)

  18. Synthesis - Publication Type (Level 2)

  19. Synthesis - Reputation of Publisher (Level 2) Synthesis - Authorship (Level 2)

  20. Synthesis - Place of Publication (Level 2)
  Research Evaluation Model (Final Evaluation Factor Index):
  FEFIi = 0.133(0.425IB + 0.484PB + 0.091OT)
        + 0.258(0.201JA + 0.075CP + 0.053TR + 0.051OP + 0.051MO + 0.156PB + 0.116BC + 0.048PM + 0.054IO + 0.043SW + 0.047CM + 0.034SP + 0.033GS + 0.037RP)
        + 0.114(0.425SA + 0.326DA + 0.154TA + 0.095MA)
        + 0.250QP + 0.146SF
        + 0.099(0.200LP + 0.302RP + 0.498IP)
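As a worked illustration of the model, the sketch below evaluates FEFI for a single research output using the weights quoted above; the dictionary keys follow the slide's abbreviations, and the example indicator scores are hypothetical, not from the paper's data.

```python
# Sketch: evaluating the Final Evaluation Factor Index for one research
# output with the weights from the model above. The scores passed in are
# hypothetical and would normally come from the rating step.

WEIGHTS = {
    "RP": (0.133, {"IB": 0.425, "PB": 0.484, "OT": 0.091}),
    "PT": (0.258, {"JA": 0.201, "CP": 0.075, "TR": 0.053, "OP": 0.051, "MO": 0.051,
                   "PB": 0.156, "BC": 0.116, "PM": 0.048, "IO": 0.054, "SW": 0.043,
                   "CM": 0.047, "SP": 0.034, "GS": 0.033, "RP": 0.037}),
    "AU": (0.114, {"SA": 0.425, "DA": 0.326, "TA": 0.154, "MA": 0.095}),
    "PP": (0.099, {"LP": 0.200, "RP": 0.302, "IP": 0.498}),
}
DIRECT = {"QP": 0.250, "SF": 0.146}   # unstructured criteria, rated directly

def fefi(scores):
    """scores: {criterion: {sub-criterion: value}} for structured criteria,
    plus {"QP": value, "SF": value} for the unstructured ones."""
    total = 0.0
    for crit, (w, subs) in WEIGHTS.items():
        total += w * sum(sw * scores.get(crit, {}).get(sub, 0.0)
                         for sub, sw in subs.items())
    for crit, w in DIRECT.items():
        total += w * scores.get(crit, 0.0)
    return total

# Hypothetical single-authored journal article in an international outlet,
# rated "very good" for quality and "good" for specialization/relevance
example = {"PT": {"JA": 1.0}, "RP": {"IB": 1.0}, "AU": {"SA": 1.0},
           "PP": {"IP": 1.0}, "QP": 0.267, "SF": 0.200}
print(round(fefi(example), 3))
```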

  21. Rating • Authorship (AU), reputation of publisher (RP), publication type (PT), and place of publication (PP) are fairly structured • Quality (QP) and specialization/relevance (SF) of research are unstructured • This study proposes an ordinal rating of the unstructured attributes through a Likert-type rating, which is normalized in order to fit the normalized structure of the other ratings • The ratings and normalized equivalents are as follows: excellent (EX) – 5 (0.333); very good (VG) – 4 (0.267); good (GD) – 3 (0.200); fair (FR) – 2 (0.133); poor (PR) – 1 (0.067)
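The normalized equivalents quoted above are consistent with dividing each Likert score by the sum of the scale points (1 + 2 + 3 + 4 + 5 = 15); a one-line check:

```python
# Reproduce the normalized Likert equivalents: score / sum of scale points
scale = [5, 4, 3, 2, 1]
normalized = {s: round(s / sum(scale), 3) for s in scale}
print(normalized)   # {5: 0.333, 4: 0.267, 3: 0.2, 2: 0.133, 1: 0.067}
```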

  22. Numerical Example • The model was tested using an expert's evaluation of thirty-four publications of three academic staff, two from the same department and one from a different department • Figure 10 presents the AHP tree for the evaluation of the research output of the three academic staff • RO1, RO2, …, ROn represent the various publications of academic staff S1, S2, and S3 respectively • The raw evaluations obtained for the three academic staff are presented in Table 5

  23. Results • Table 6 presents the final evaluation of research output after the application of the fuzzy-enhanced AHP system • A multidimensional analysis of research output is presented in Table 7

  24. Discussion • Publication type (PT) and quality of publication (QP) are considered the most important criteria in the evaluation process • Place of publication (PP) is considered the least important factor • Journal publications are considered the most valued publication type (20.1%), while guidance/supervision is considered the least valued (3.3%)

  25. Discussion • Ordinal ranking of some publication types is as follows: JA>PB>BC>CP>GC • This shows a slight variation from the results obtained in Lootsma and Bots (1999), which ranked JA>PB>GC>BC>CP • Reasons: books appear to be wider in circulation and usage than postgraduate dissertations, especially in recent times; also, there is increased emphasis on teaching (Obasi 2008), which further increases the importance of books and book contributions in the evaluation process

  26. Conclusion and Implications • This study provides a framework for the quantitative analysis of university academic staff research productivity • The model would assist international ranking agencies in reconsidering what constitutes the critical elements of institutional ranking • A computer program could be developed on the basis of this model to assist in peer evaluations by academic heads and colleagues • The model discussed in this study could be integrated with institutional databases of faculty research profiles for purposes of inter-faculty and inter-institutional comparisons

  27. End
