Tools 4: Analysis of Priorities / Scores

Use the AIPP Matrix Ranking tool to assess and analyze the priorities of different participants in a project intervention. The tool captures changes in participants' priorities due to the intervention and quantifies their preferences through matrix ranking and scoring.

Presentation Transcript


  1. Tools 4: Analysis of Priorities / Scores. Vanda Altarelli, AIPP

  2. Matrix Ranking • Matrix ranking captures changes in different people's priorities due to a project intervention. Participants can rank, for example, the different tree species or types of training desired (whatever is being assessed) on the basis of the criteria they have chosen. While matrix ranking (first, second, third, etc.) gives an indication of relative preferences, scoring (placing each item on a scale of 1–10) introduces a greater element of quantification to the preferences.
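
To make the distinction concrete, the minimal sketch below (in Python, with entirely hypothetical species and scores, not taken from any exercise in this presentation) derives a ranking from a set of 1–10 scores: the scores carry the quantified preferences, while the ranks only show the order.

```python
# A minimal sketch with made-up species and scores: each species is scored 1-10
# against each criterion, and ranks (1st, 2nd, 3rd) are then derived from the scores.

scores = {  # criterion -> {species: score on a 1-10 scale}
    "good milk yield":    {"species A": 8, "species B": 3, "species C": 5},
    "high butterfat":     {"species A": 6, "species B": 4, "species C": 7},
    "preferred by goats": {"species A": 9, "species B": 2, "species C": 6},
}

for criterion, by_species in scores.items():
    # Sort from highest to lowest score and assign ranks 1, 2, 3, ...
    ordered = sorted(by_species, key=by_species.get, reverse=True)
    ranks = {species: position for position, species in enumerate(ordered, start=1)}
    print(criterion, "| scores:", by_species, "| ranks:", ranks)
```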

  3. Steps in Matrix Ranking • 1. Organise at least two separate focus groups: one of women and one of men. Make sure that a mix of socio-economic groups is included in each. • 2. Discuss with the participants the attributes of the issue to be ranked. For example, jointly determine the different attributes of a fodder species (e.g., giving good milk yields, giving high butterfat, eaten by cows/buffaloes/goats by preference, etc.). It is easiest if these are positive criteria.

  4. Steps in Matrix Ranking • 3. Reach agreement on the criteria to be used. Take notes on the reasons for preferences discussed by the participants. • 4. Facilitate a decision on whether to conduct the ranking individually or as a group. The participants should be comfortable about participating, and able to discuss the issues freely and express their opinions. • 5. Prepare the matrix on a large sheet of paper, or draw it on the ground with locally available material (a stick, chalk).

  5. Steps in Matrix Ranking • If the participants are not familiar with the written language, use graphic representation of the criteria. Ranking/scoring can be done using stones, grains or any other locally available material. Participants often feel more comfortable if they can change their minds; from this point of view, using stones or similar material is better than marking scores with pens. • 6. Facilitate the ranking/scoring, ensuring full discussion of the reasons for the different scores.

  6. Steps in Matrix Ranking • 7. If the exercise is done in different groups, consolidate the ranking/scoring. Analyse and discuss the findings in the group, together with the reasons given for each preference. • Let's take an example from a training programme by the Rural Education Programme in Karnataka that provided capacity-building inputs to shepherds, with a special focus on women. The families reported that the intervention had brought them closer together, through the intensive time they shared in the training.
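
For step 7, a minimal sketch of how scores from two focus groups could be consolidated; the group names, criteria and scores are hypothetical. Large gaps between the groups are flagged so that the reasons can be discussed when the findings are analysed together.

```python
from statistics import mean

# Hypothetical scores (1-10) from two focus groups for the same criteria.
womens_group = {"milk yield": 9, "butterfat": 6, "ease of collection": 4}
mens_group = {"milk yield": 7, "butterfat": 8, "ease of collection": 5}

for criterion in womens_group:
    combined = mean([womens_group[criterion], mens_group[criterion]])
    gap = abs(womens_group[criterion] - mens_group[criterion])
    flag = "  <- groups disagree, discuss reasons" if gap >= 2 else ""
    print(f"{criterion}: combined score {combined:.1f}{flag}")
```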

  7. Matrix Ranking... • The project aimed at enhancing the knowledge and skills of women by training them in modern wool processing techniques over a period of six months. The training was conducted separately in each village, and the timing was adapted to the needs of the trainees. Training the women was challenging: most of them lacked confidence at the beginning and doubted whether the training would bring them any benefits. After completion of the training, a participatory impact assessment of the training was carried out.

  8. Ranking... • The facilitators initiated the discussion on the topic and divided the exercise into four parts: • ● major learnings from the training • ● period, participation and training methodology adopted in the training • ● benefits gained from the training • ● difficulties faced during the training. • They then introduced the idea of criteria for evaluating the training. Participants listed various criteria.

  9. Ranking... • After in-depth discussion, a number of important criteria were identified by consensus. The participants ranked the criteria and gave each a score in the range of 1 (lowest) to 4 (highest), using seeds to indicate their scores. The findings were listed for further reflection. Any points that were not clear were then discussed and clarified with the participants. In some situations, community members may be most comfortable with ranking, rather than assigning specific numerical values (scoring).

  10. Ranking... • Pair-wise ranking (comparing two attributes and placing one above the other) may be particularly easy to understand. However, in other situations – as in this case – people may be more comfortable with a system of numerical scoring. The important thing is for the facilitator to ensure that everyone understands and feels at ease with the method used. The majority of the women rated all the lessons learned with the highest score. The main message was that the majority of the trainees gained good skills in modern spinning, as well as other skills such as marketing, charaka repair, etc.
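
A minimal sketch of pair-wise ranking, with hypothetical attributes and choices (not the actual results of the Karnataka exercise): the group compares each pair of attributes, and the attribute that wins the most comparisons comes out on top.

```python
from itertools import combinations
from collections import Counter

attributes = ["spinning skills", "marketing", "charaka repair"]

# Hypothetical outcome of each pair-wise comparison (the preferred attribute).
preferred = {
    ("spinning skills", "marketing"): "spinning skills",
    ("spinning skills", "charaka repair"): "spinning skills",
    ("marketing", "charaka repair"): "charaka repair",
}

wins = Counter({attribute: 0 for attribute in attributes})
for pair in combinations(attributes, 2):
    wins[preferred[pair]] += 1

# The attribute with the most pair-wise "wins" is the most preferred.
for attribute, count in wins.most_common():
    print(attribute, count)
```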

  11. Skills Learned

  12. Benefits from the training

  13. Difficulties during the training

  14. Matrix Ranking… Some of the criteria identified by the participants were unexpected. For example, the facilitators had not expected the participants to be so enthusiastic about charaka repair; they had thought that the women would make use of service providers. Yet, in fact, they were happy to be self-sufficient in maintenance matters. Tips: While working in a large group, take care that a few individuals do not dominate, as this may result in large variation in the ranking.

  15. Matrix Ranking… If the number of participants is large, it is better to work in smaller groups (in this case two groups were formed, with the facilitators ensuring that the more vocal participants were spread between them) and then compare the groups' perspectives. The scores in the matrix method are best given with stones, seeds, pebbles, etc., particularly when working with participants who are not literate.

  16. Matrix Ranking… The main issue here is flexibility. An advantage of using seeds and stones is that it allows the participants to modify the scores easily, after further reflection. Scoring in writing tends to be perceived as fixed, once put on paper, even if it is realised that the rank/score should be modified.

  17. Well Being Ranking When conducted sensitively, this exercise can provide insights to both outsiders and village members on household differences within the community. Well being ranking is an extension of the concept of wealth ranking; the latter largely relates to income and physical assets, while ‘well-being’ also includes more over-arching issues like discrimination, health, access to basic needs, indebtedness, etc.

  18. Well Being Ranking Well-being rankings may be used in the process of planning, as a means of discussing differences between families in a village and identifying specific target groups in a participatory manner. They may also be used to explore, monitor and evaluate the impacts of any intervention in terms of poverty alleviation or changes in perceived well-being. Wealth and well-being rankings are usually only possible once a good level of rapport has been established with the community members.

  19. Steps in Well Being Ranking 1. Facilitate a discussion on the concept of well-being with the participants, asking them to define it in local terms. What are the attributes that make up well-being, in their eyes? 2. Having reached an agreed set of local criteria to define well-being, introduce the concept of well-being ranking, and the reasons for conducting it in this particular case.

  20. Steps in Well Being Ranking 3. Ask the participants to consider the households in their village, and how they might be grouped in terms of well-being. This should be done in a non-personal, non-threatening manner, seeking general groupings, rather than the identification of individual households and then classifying them. 4. Ranking may be done with seeds, stones, or grains – with participants deciding on the number of households to be placed in each pile, and having the possibility to change their minds.
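
A minimal sketch of step 4, with invented category names and counts: each participant uses stones to show how many households they would place in each well-being group, and the counts are compared as a starting point for discussion.

```python
# Hypothetical pile counts: participant -> {well-being category: number of households}
piles = {
    "participant A": {"better off": 8, "in between": 20, "struggling": 12},
    "participant B": {"better off": 6, "in between": 18, "struggling": 16},
    "participant C": {"better off": 7, "in between": 22, "struggling": 11},
}

for category in ["better off", "in between", "struggling"]:
    counts = [grouping[category] for grouping in piles.values()]
    # A large spread signals that the group should discuss the category further.
    print(f"{category}: counts {counts}, spread {max(counts) - min(counts)}")
```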

  21. Steps in Well Being Ranking 5. If individuals conducted the ranking separately, they should then share their results with the others. The aim should be to reach a collective result, through discussion. Where people are at ease with classifying individual households according to their well-being status, this may be a logical next step – but only if there is genuine openness to doing so. The following example, on the effects of pond renovation, comes from Tamil Nadu.

  22. Pond Renovation The questions asked comprised the following: 1. What benefits were now derived from the pond? The participants were asked to list these and rank them in order of importance. 2. How did they define ‘well-being’? How did the households in the village vary according to well-being? How could households in the village be grouped? 3. What contributions had been made to the pond renovation by households in the different well-being categories? 4. Which benefits were enjoyed by which categories of households?

  23. Well Being Ranking

  24. Pond Renovation The listing of the contributions made by households according to well-being ranking indicated that the more well-to-do households had contributed proportionately more to the pond renovation, through, for example, supplying a tractor for two days, paying a donation of Rs 500, or providing stones. Those in the poorer categories provided a small donation of Rs 10–20 and contributed labour to the extent possible. A matrix showing which benefits were enjoyed by which category indicated that households in the middle-level categories seemed to have benefited most, although everyone had received some benefits.

  25. Pond Renovation The poorest households did not enjoy what were identified as the two most important benefits – drinking water for, and washing of, livestock – because they had no animals. However, poorer households had gained opportunities for nursery work and fish rearing, had access to water for washing clothes, and like everyone else were looking forward to an unexpected potential side-effect of the pond renovation.

  26. Community Score Cards • The Community Score Card (CSC) is a monitoring and evaluation approach that enables beneficiary community members to assess service providers and to rate their services/performance using a grading system in the form of scores. It is used to solicit user perceptions of the quality of facilities/services, satisfaction with them, transparency and the general performance of the service provider, in order to pinpoint defects and omissions in service and facility delivery and so improve it.

  27. Why CSC? • Service providers need to be assessed to enable them to evaluate their own services. It is best to let the beneficiaries themselves do the assessment, since they can give more authentic information about their own satisfaction than anybody else. The exercise also offers the service provider an opportunity to measure the beneficiaries' level of satisfaction with its services, and challenges it to look back and correct anomalies and defects. In the end, community members are empowered to demand accountability from service providers through the use of this method.

  28. Key Steps • 1. Hold a stakeholders' briefing. The person organising the CSC evaluation needs to explain thoroughly to the service providers the need for the exercise and its purpose/objectives. This will prevent any form of antagonism or fear of blackmail. • 2. Collect supply-side information from service providers for input tracking at the community level. Two or more focus groups are held to validate the inputs from the service providers. • 3. Hold a general community meeting to explain the exercise and its purpose. Develop themes and indicators.

  29. Key Steps.. • 4. Select the indicators for the evaluation with the community members themselves. • 5. Create focus groups representative of the different categories. • 6. Set a range of scores with the community members. They will use these scores for each indicator to assess their level of satisfaction with a particular service (e.g., 1 = Poor, 2 = Average, 3 = Good, etc.). • 7. Service providers do a self-evaluation using the indicators developed by the community members.

  30. Key Steps… • 8. Generate community (clustered) opinions, referred to as Community Cluster Scorecards, through the focus group discussions. The individual scores are collated and the group average score is computed to represent the clustered opinions. These can be averaged again to obtain a community average score representing the community's overall opinion, and the community scores can in turn be clustered at project level to produce a project average score.
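
A minimal sketch of the aggregation in step 8, using invented indicators, focus groups and scores: individual scores are averaged into group (cluster) scores, then into a community score, and finally into a project-level score.

```python
from statistics import mean

# indicator -> {focus group -> individual scores (e.g. 1 = Poor, 2 = Average, 3 = Good)}
scores = {
    "staff attitude": {"women": [2, 3, 3], "men": [1, 2, 2], "youth": [3, 3, 2]},
    "waiting time":   {"women": [1, 1, 2], "men": [2, 2, 1], "youth": [1, 2, 2]},
}

community_scores = {}
for indicator, groups in scores.items():
    cluster_scores = {group: mean(values) for group, values in groups.items()}
    community_scores[indicator] = mean(cluster_scores.values())
    print(indicator, "| cluster scores:", cluster_scores,
          "| community score:", round(community_scores[indicator], 2))

# In practice the project-level score would also average across communities.
print("project-level score:", round(mean(community_scores.values()), 2))
```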

  31. Key Steps.. • 9. Hold an interface meeting between the service/facility provider and the community, at community level, at which the scorecard and the provider's self-evaluation are presented, so that the feedback from the community is duly noted and measures are agreed to correct whatever shortcomings there may be in the service delivered. • 10. Hold a project-level forum comprising service providers, politicians and community representatives from the various communities assessed. The Project Scorecard is presented, the issues are discussed and commitments are made.

  32. CSC • 11. Sustain the CSC system by institutionalizing it within the various authorities and institutions that may have a role to play in the sector. This means that the exercise should not be just a one-off activity but must become part of the routine M&E activities of the service/facility provider. • Benefits • It is very simple. • It offers the opportunity for beneficiaries of services/facilities to assess the provider. • It offers an opportunity for the provider to review his/her strategy when planning other projects. • It enhances confidence in the provider, especially when the scores are high.

  33. Benefits of CSC. • It enhances confidence and zeal in the beneficiaries to have a voice and a hand in project design and implementation. • It promotes accountability in service and facility delivery. • It promotes the sustainability of projects. • Lessons • Community members (beneficiaries) are usually keen to contribute to project monitoring/evaluation. • Beneficiaries are mostly eager to speak out, criticise projects and suggest what they feel is best for them.

  34. CSC… • Service providers are usually willing to be criticized and are prepared to listen to what the beneficiaries say about their projects • The facilitator of the CSC process should approach the system with tact to avoid any suspicion of blackmail or antagonism especially from the service provider • It is a very effective way of assessing projects especially in terms of beneficiary satisfaction

  35. Citizens Report Cards • It is another tool to assess the quality of services provided. It started in Bangalore (1993) to monitor government services in terms of efficiency and accountability. The exercise gathered citizen feedback on performance of public agencies and disseminated the findings to the citizenry, thus exerting public pressure on the agencies to initiate reforms. A seven-point rating scale facilitated quantification of citizen satisfaction levels with regard to service delivery, dimensions of corruption, staff behavior, and so forth. The exercise was repeated in 1999.

  36. CRC.. • It seeks to answer the following questions: How satisfactory are the public services? Which aspects of the services are satisfactory and which are not? What are the direct and indirect costs of acquiring these services? • The sample for a CRC needs to be stratified by several categories: age, gender, location and socio-economic status. • A pre-tested questionnaire is prepared to elicit user responses on overall satisfaction with service delivery, along with other dimensions, such as: (a) staff behavior; (b) the number of visits required to complete a task; (c) the frequency of problem resolution; and (d) the information provided.

  37. CRC • A seven-point rating scale (7 for “highly satisfied” and 1 for “least satisfied”) enables quantification of the responses. The responses are transferred into a computerized database and analyzed using a software package. The end product is a set of scores that enables the public ratings of agency services to be ranked and compared. The ratings are shared with senior agency officials and publicized among the population. • The exercise is repeated several years later and the same agencies are rated again, to see what changes have occurred.
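
A minimal sketch of the scoring step, with invented agencies and ratings (not data from the Bangalore exercise): responses on the seven-point scale are averaged per agency, and the agencies are then ranked by their public rating.

```python
from statistics import mean

# Hypothetical ratings (7 = highly satisfied ... 1 = least satisfied) per agency.
responses = {
    "water supply": [5, 6, 4, 5, 7],
    "electricity": [3, 2, 4, 3, 3],
    "municipal services": [4, 5, 5, 6, 4],
}

ratings = {agency: mean(r) for agency, r in responses.items()}
ranked = sorted(ratings.items(), key=lambda item: item[1], reverse=True)
for rank, (agency, score) in enumerate(ranked, start=1):
    print(rank, agency, round(score, 2))
```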

  38. CRC • It can be a very empowering tool if the results are widely publicized through the media and several meetings and seminars are organised with not only representatives of service providers but also high-profile political figures, NGOs and community representatives. This stimulates civil society and citizens' groups to put pressure on governments to improve performance. A useful follow-up action is to provide training to civil society organizations and citizens' groups in the use of the report card system. In Bangalore, for example, Swabhimana, a Citizen-State Forum for a Clean, Green, and Safe Bangalore, was created.
