Monitoring and evaluation (M&E) is a project management technique that is an integral part of any programme cycle. It includes the gathering and analysis of information, and the reporting of processes and outputs. In this study session, you will learn how M&E can be used to assess progress made in sanitation and waste management.
Any M&E system needs to ensure that the programme implementation is carried out as planned and is achieving the aims and objectives to an acceptable quality, and in the planned time period. The system should also provide assurance that sustainability and management issues are being addressed and that supporting organisations such as local community groups are in place and functioning.
The World Bank (2004) summarises the advantages of M&E as ‘better means for learning from past experience, improving service delivery, planning and allocating resources and demonstrating results as part of accountability to key stakeholders’.
When you have studied this session, you should be able to:
15.1 Define and use correctly each of the terms printed in bold. (SAQs 15.1, 15.2 and 15.4)
15.2 Explain the difference between monitoring and evaluation. (SAQ 15.2)
15.3 Describe the purpose of M&E and explain why it is important. (SAQ 15.3)
15.4 Identify the data and methods that can be used to monitor and evaluate the performance of urban sanitation and waste management schemes. (SAQ 15.4)
Monitoring and evaluation are critically important aspects of planning and management of any programme. Monitoring is the systematic and continuous assessment of the progress of a piece of work over time, in order to check that things are going to plan. Evaluation is an assessment of the value or worth of a programme or intervention and the extent to which the stated objectives have been achieved. Evaluation is not continuous and usually takes place periodically through the course of the programme or after completion. Together, monitoring and evaluation are a set of processes designed to measure the achievements and progress of a programme. The two terms are closely connected and are frequently combined with the result that the abbreviation M&E is widely used.
A town health office is interested in finding out how many families practise solid waste sorting and reuse at household level. Is this monitoring or evaluation?
This is monitoring because it is an ongoing activity concerned only with counting something, in this case the number of families sorting their waste.
Programmes, projects and other interventions can be described in five stages, as shown in Figure 15.1. The inputs, on the left, are the resources (funding, equipment, personnel) and activities that are undertaken. The results, on the right, are the outputs, outcomes and impacts (see Box 15.1).
An effective M&E system measures the inputs, outputs, outcomes and impacts resulting from implementation of a programme. To provide useful knowledge these results need to be compared with the situation before the programme started, which requires baseline data. Baseline data gives information about the situation at the start of an intervention (the baseline position) and provides a point of comparison against which future data, collected as part of a monitoring process, can be compared. Progress can be evaluated by comparing the two.
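As an illustration of comparing monitoring data against a baseline, the sketch below uses hypothetical coverage figures (all the numbers are invented for the example) to show how progress towards a target might be calculated:

```python
# Illustrative sketch with hypothetical figures: comparing the latest
# monitoring data against baseline data to assess progress towards a target.

baseline_coverage = 42.0   # % of households with improved sanitation at the start
current_coverage = 55.0    # % measured in the latest monitoring survey
target_coverage = 70.0     # % set as the programme objective

# Change since the baseline, in percentage points
change = current_coverage - baseline_coverage

# How far along the road from baseline to target the programme has come
progress_to_target = change / (target_coverage - baseline_coverage) * 100

print(f"Coverage rose by {change:.1f} percentage points")
print(f"The programme is {progress_to_target:.0f}% of the way to its target")
```

The same comparison could equally be done by hand or in a spreadsheet; the point is that without the baseline figure, neither the change nor the progress towards the target can be calculated.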
There are several words used in M&E that can be confused. They sound similar but have important differences in their meaning.
It’s very important to plan monitoring activities during the earliest stages of project development — they should be integrated into project activities and not be added on as an afterthought. Monitoring requires regular and timely feedback in the form of reports from implementers to project managers so they can keep track of progress. These reports provide information about activities and what has been achieved in terms of outputs. They also include financial reports that give information on budgets and expenditure. With this information, managers can assess progress and plan the next steps for their project.
A well-managed M&E system will allow stakeholders to:
Reporting on monitoring activity is essential because otherwise the information cannot be used. There is no point in collecting data and then filing it away without sharing it. As noted above, one of the reasons for undertaking M&E is to inform decision makers and enable lessons to be learned, so they need to receive the information in a timely way for that benefit to be realised.
As you have seen, a key part of monitoring is the gathering of data.
Data can be classified into two types. Factual information based on measurement is called quantitative data. Information collected about opinions and views is called qualitative data.
Suggest examples of quantitative and qualitative data that could be collected about open defecation in a kebele.
Collecting data about the change in the proportion of people practising open defecation is an example of quantitative data. An example of qualitative data could be assessing people’s views about the reduction of open defecation. You may have thought of other examples.
If you look back to Figure 5.1 in Study Session 5, you will find an example of quantitative monitoring data. The WHO/UNICEF Joint Monitoring Programme data for sanitation coverage is compiled from monitoring programmes in countries all over the world.
Monitoring is a continuous or periodic review of project implementation focusing on inputs, activity work schedules and outputs. It should be designed to provide constant feedback to ensure that project performance is effective (the extent to which the purpose has been achieved or is expected to be achieved) and efficient (the degree to which the outputs are achieved through well-organised use of resources). Monitoring should allow the timely identification and correction of deviations in a programme, providing early warning and the opportunity to remedy undesirable situations before damage occurs or gets worse.
Monitoring consists of three related activities, which are:
Monitoring should be a continuous process of regularly and systematically reviewing achievements, performance and progress towards the planned objectives of a programme. This will require a schedule for monitoring activities that should be prepared at the start and reviewed regularly. For example, a typical schedule for monitoring at the Woreda Health Office level would be part of an annual plan and might include:
An effective monitoring programme needs precise and specific measures that can be used to assess progress towards achieving the intended goals. These are called indicators. An indicator is something that can be seen, measured or counted and that provides evidence of progress towards a target. Some examples of basic monitoring indicators for urban sanitation and waste management are:
These are all examples of indicators that could be used to monitor progress towards specific programme targets.
The terms ‘performance indicator’ or key performance indicator (KPI) are often used by organisations to describe measures of their performance, especially in relation to the service they provide, and how well they have met their strategic and operational goals. KPIs can be measures of inputs, processes, outputs, outcomes or impacts for programmes or strategies. When supported with good data collection, analysis and reporting, they enable progress tracking, demonstration of achievement and allow corrective action for improvement. Participation of key stakeholders in defining KPIs is important because they are then more likely to understand and use them for informing management decisions.
KPIs can be used for:
Sometimes too many indicators may be defined without accessible and reliable data sources. This can make the evaluation costly and impractical. There is often a trade-off between picking the optimal or desired indicators and having to accept indicators which can be measured using existing data.
The Ethiopian WaSH M&E Framework and Manual (FDRE, n.d.) uses the following KPIs for sanitation and hygiene:
The percentages are calculated from the data collected during monitoring surveys. For example:
Percentage of households with a functioning latrine = (number of households with a functioning latrine ÷ total number of households surveyed) × 100
The advantage of KPIs is that they provide an effective means to measure progress toward objectives. They can also make it easier to make comparisons. For example, different approaches to a common problem can be compared to find out which approach works best, or results from the same intervention in a number of districts can be compared to find out what other factors affected the outcomes. It is important for KPIs to be carefully defined so they can be applied consistently by different organisations (Jones, 2015). For example, the definition of ‘functioning latrine meeting minimum standards’ in the KPIs listed above should specify exactly what the minimum standards are. Without precise definitions, survey data could be collected and interpreted by different people in different ways, which would make comparisons meaningless and useful analysis and evaluation impossible.
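A KPI calculation of this kind can be sketched in a few lines. The survey counts below are hypothetical, chosen only to show the arithmetic:

```python
# Illustrative sketch with hypothetical survey counts: calculating the KPI
# 'percentage of households with a functioning latrine'.

surveyed_households = 1200                  # total households in the survey
households_with_functioning_latrine = 876   # those meeting the minimum standards

# KPI = (households with a functioning latrine / total surveyed) x 100
kpi = households_with_functioning_latrine / surveyed_households * 100

print(f"Households with a functioning latrine: {kpi:.1f}%")  # 73.0%
```

Note that the calculation is only as good as the underlying definition: surveyors must share a single precise definition of ‘functioning latrine’ before the count in the numerator means anything.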
Evaluation should answer why and how programmes have succeeded or failed and allow desired changes to be planned for improvements in implementation. It is an activity that should allow space to reflect upon and judge the worth (value) of what is or has been done. Evaluation can be seen as a cyclical process as shown in Figure 15.2. You should read this diagram by starting at the top and then following the arrows down the middle. From the top, data are collected, then analysed and then evaluated. If the aims of the programme are being achieved (the arrow to the right), no action is required but results should be reviewed and fed into the next evaluation if there is one. However, if the aims were not achieved (the arrow pointing down), action is required in the form of a review of the management plan and changes to the way the programme is implemented. This will then require further evaluation in due course (the arrow going back up to the top) to assess the success of the revised programme.
Unlike monitoring which is a regular activity, evaluation will be conducted only when there are evaluation questions that need to be answered. There is no fixed schedule for this but it may happen:
Evaluation questions may arise from monitoring data or any other observations that lack in-depth information to explain observed levels of performance or effectiveness. In such cases, evaluation provides useful data on how and why programmes succeed or fail.
Evaluation as an activity may be related to processes, outcomes or impacts of a programme.
Process evaluation, as the name suggests, looks at process questions and can give insight into whether the project is on track or not, and why. At the end of a project, it would involve a review of all the project processes, from start to finish. Process evaluation aims to explain why things have happened in the way they have and answers questions such as:
Outcome evaluation is the assessment of what the intervention has achieved. For example, if the intended outcome was to reduce open defecation in a target population, evaluation questions could be:
In most cases, outcomes or impacts are influenced by more than one factor and by other changes in a situation. For this reason, an outcome evaluation needs to be designed so that it is possible to estimate the difference between the current outcome level and the level that would be expected if the intervention were not in place.
Impact evaluation is the systematic identification of the effects (positive or negative and intended or not) on individuals, households or communities caused through implementation of a programme or project (The World Bank, 2004). Impact evaluations can vary in scale. They may be large surveys of target populations that use baseline data and then a follow-up survey to compare before and after. They could also be small-scale rapid assessments where estimates of impact could be obtained from a combination of group interviews, focus groups, case studies and available secondary data.
Impact evaluation can be used to:
The advantages of impact evaluation are that it provides estimates of the magnitude of outcomes and analyses the impacts for different demographic groups, households or communities over time. It should show the extent of the difference that a programme is making and allow plans for improvement to be made. However, it needs competent managers and some approaches can be very expensive and time-consuming.
One year after a hygiene promotion programme has ended, the Regional Health Bureau is interested to see if child health has improved in the woredas where the programme was implemented. Is this monitoring or evaluation?
This is evaluation because it is concerned with the impacts of a particular programme. However, monitoring is also involved because the data collection required for the evaluation would probably have come from regular monitoring reports.
There is a wide range of tools available which can be used to generate information for monitoring and evaluation purposes.
Think back to Study Session 3 and list the main methods that can be used to gather data from communities and individuals about their access and use of sanitation and waste management services.
The main methods are:
Large-scale monitoring programmes can generate enormous amounts of data. Collating the data and organising it in a way that is meaningful for evaluation or other purposes is a significant task. This is the purpose of a management information system (MIS). An MIS is a computer-based system that provides tools for collecting, organising and presenting information so that it is useful for managers and other stakeholders.
In Ethiopia, there are two national monitoring systems that are relevant to urban sanitation and waste management. The Health Management Information System/Monitoring and Evaluation (HMIS/M&E) is used to record data from routine services and administrative records across all woredas and all health facilities throughout the country (MoH, 2008; Hirpa et al., 2010).
In the WASH sector, the National WASH Inventory (NWI) is a country-wide monitoring programme that was initiated in 2010/2011. Its purpose is to provide a single comprehensive set of baseline data about water, sanitation and hygiene provision for the whole country. The early phases of data collection used paper-based surveys and questionnaires, but later phases have moved to a system of collecting data using smartphones (where there is network coverage), which is much quicker and more efficient. The WASH MIS has been developed to collect monitoring data and to enable production of reports from national to woreda levels.
There is a lack of coordination between the HMIS and WASH MIS and this is recognised as a problem. In addition, at present, there is greater emphasis on water supply than there is on sanitation and hygiene, and currently there is no national monitoring of solid waste management. Recent developments such as the One WASH National Programme, which you read about in Study Session 1, are signs of the move towards more collaborative and integrated working in the sector which will bring many benefits.
In Study Session 15, you have learned that:
Now that you have completed this study session, you can assess how well you have achieved the Learning Outcomes by answering these questions.
Match the following words to their correct definitions.
Using the following two lists, match each numbered item with the correct letter.
quantitative data
impacts
process evaluation
baseline data
qualitative data
outcomes
indicator
impact evaluation
outputs
a. identifying the effects on individuals, households or communities caused through implementation of a programme or project
b. a way of determining if a programme is on track to meet its aims
c. the things produced or objectives achieved by a project or programme
d. something that can be seen, measured or counted, providing evidence of progress towards a target
e. data collected at the start of an intervention to provide a point of comparison against which future data can be compared
f. effects of an intervention, usually in the short to medium term
g. the long-term effects of a project or programme
h. measurable factual data
i. information collected about views and opinions
Which of the following statements are false? In each case explain why it is incorrect.
B is false. Monitoring should be a continuous process, not just an occasional one.
E is false. Evaluation could, and probably should, be done at the end of a project, but it is also important to evaluate at other times.
Give three reasons for incorporating plans for M&E during the early stages of a project’s development.
Three possible reasons for incorporating plans for M&E during the early stages of a project’s development are:
You may have thought of other reasons.
Explain why an indicator based on quantitative data will be more useful than an indicator based on qualitative data. Use examples of indicators in your answer and describe how you might measure them.
For an indicator to be a useful tool for assessing a situation, it has to be something that can be seen, counted or measured. It also should be precisely defined. Quantitative data is factual information based on measurement, so it would meet the requirements for an indicator. For example, the number of people using a latrine could be counted by observation on a small scale or, on a larger scale, could be measured by asking people about their habits using a questionnaire or in interviews. For the survey method, it would be important to specify exactly what type of latrine was being used.
Qualitative data would be much harder to use as an indicator because it is not so easy to measure. For example, you could gather qualitative data about the reasons why people did or did not use the latrine. This could be useful information but would not be a helpful indicator because it would not produce simple numerical answers.