Monitoring and Evaluation
Zahid Hussain

A monitoring and evaluation (M&E) plan is a document that helps to track and assess the results of the interventions throughout the life of a program. It is a living document that should be referred to and updated on a regular basis. While the specifics of each program’s M&E plan will look different, they should all follow the same basic structure and include the same key elements.

An M&E plan will include some documents that may have been created during the program planning process, and some that will need to be created from scratch. For example, elements such as the logic model/logical framework, theory of change, and monitoring indicators may have already been developed with input from key stakeholders and/or the program donor. The M&E plan takes those documents and develops a further plan for their implementation.


Why develop a Monitoring and Evaluation Plan?

It is important to develop an M&E plan before beginning any monitoring activities so that there is a clear plan for what questions about the program need to be answered. It will help program staff decide how to collect data to track indicators, how monitoring data will be analyzed, and how the results will be disseminated, both to the donor and internally among staff members for program improvement. Remember, M&E data alone is not useful until someone puts it to use! An M&E plan will help make sure data is used efficiently to make programs as effective as possible and to report on results at the end of the program.


Who should develop a Monitoring and Evaluation Plan?

An M&E plan should be developed by the research team or staff with research experience, with input from program staff involved in designing and implementing the program.


When should a Monitoring and Evaluation Plan be developed?

A monitoring and evaluation plan should be developed at the beginning of the program, when the interventions are being designed. This ensures there is a system in place to monitor the program and evaluate its success.

Who is this guide for?

This guide is designed primarily for program managers or personnel who are not trained researchers themselves but who need to understand the rationale and process of conducting research. It can help managers make the case for research and ensure that research staff have adequate resources, so that the program is evidence-based and results can be tracked over time and measured at the end of the program.

Steps

Step 1: Identify Program Goals and Objectives

The first step in creating an M&E plan is to identify the program goals and objectives. If the program already has a logic model or theory of change, the program goals are most likely already defined. If not, the M&E plan is a great place to define them.

Defining program goals starts with answering three questions:

  1. What problem is the program trying to solve?
  2. What steps are being taken to solve that problem?
  3. How will program staff know when the program has been successful in solving the problem?

Answering these questions will help identify what the program is expected to do and how staff will know whether or not it worked.

Example: If the program provides skills training for girls, the answers might look like this:

Problem: High rates of unskilled girls
Solution: Skills training for girls at community centers
Success: Lower rates of unskilled girls in the locality

From these answers, it can be seen that the overall program goal is to reduce the rate of unskilled girls in the community.

It is also necessary to develop intermediate outputs and objectives for the program to help track successful steps on the way to the overall program goal.

Step 2: Define Indicators

Once the program’s goals and objectives are defined, it is time to define indicators for tracking progress towards achieving those goals. Program indicators should be a mix of those that measure process, or what is being done in the program, and those that measure outcomes.

Process indicators track the progress of the program. They help to answer the question, “Are activities being implemented as planned?” Some examples of process indicators are:

  • Number of trainings held with girls
  • Number of outreach activities conducted at girl-friendly locations
  • Number of courses taught at girl-friendly locations
  • Percent of girls reached with program awareness messages through the media or word of mouth (WOM)

Outcome indicators track how successful program activities have been at achieving program objectives. They help to answer the question, “Have program activities made a difference?” Some examples of outcome indicators are:

  • Percent of girls attending the course in the first round
  • Number and percent of trained peer leaders from the local community providing services to girls
  • Number and percent of new girls enrolling in the course, and of those securing jobs with their newly acquired skills

These are just a few examples of indicators that can be created to track a program’s success.
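To make the distinction concrete, here is a minimal sketch in Python of how one process indicator and one outcome indicator might be computed from raw program records. The record structure and all figures are hypothetical, purely for illustration:

```python
# Minimal sketch: computing one process and one outcome indicator.
# All records and figures below are hypothetical.

# Process indicator: number of trainings held, tallied from an activity log
activity_log = [
    {"type": "training", "location": "Center A"},
    {"type": "outreach", "location": "Center B"},
    {"type": "training", "location": "Center B"},
]
num_trainings = sum(1 for entry in activity_log if entry["type"] == "training")
print(f"Trainings held: {num_trainings}")  # Trainings held: 2

# Outcome indicator: percent of girls attending the course in the first round
enrolled = 120               # girls enrolled (hypothetical)
attended_first_round = 78    # girls who attended round one (hypothetical)
attendance_rate = attended_first_round / enrolled * 100
print(f"First-round attendance: {attendance_rate:.1f}%")  # 65.0%
```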

Evaluating performance over time

Usually, a quasi-experimental pre-post design is applied, comparing the same group (the treatment group) before and after the intervention. Each individual's score, such as a girl's score in our example, reflects her status at a specific point in time: at the pre (baseline) or post (endline) measurement. The difference between the two scores shows the progress made from the pre to the post situation. This difference, calculated as a single difference, is taken as the impact of the program.
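As a worked illustration, the single difference is simply the mean post score minus the mean pre score. A minimal sketch in Python, using hypothetical assessment scores for the same group of girls:

```python
# Minimal sketch of the single-difference (pre-post) impact estimate.
# Scores are hypothetical skill-assessment results for the same group.

pre_scores = [42, 55, 38, 61, 47]    # baseline (pre) scores
post_scores = [58, 70, 52, 75, 63]   # endline (post) scores

mean_pre = sum(pre_scores) / len(pre_scores)      # 48.6
mean_post = sum(post_scores) / len(post_scores)   # 63.6

# Single difference: mean post score minus mean pre score
impact = mean_post - mean_pre
print(f"Estimated program impact: {impact:.1f} points")  # 15.0 points
```

Note that a single difference on the treatment group alone attributes all observed change to the program; designs with a comparison group support stronger claims.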

Step 3: Define Data Collection Methods and Timeline

After creating monitoring indicators, it is time to decide on methods for gathering data and how often various data will be recorded to track indicators. This should be a conversation between program staff, stakeholders, and donors. These decisions will have important implications for what data collection methods are used and how the results are reported.

The source of monitoring data depends largely on what each indicator is trying to measure. The program will likely need multiple data sources to answer all of the programming questions. The table below gives some examples of what data can be collected and how.

Information to be collected | Data source(s)
Implementation process and progress | Program-specific M&E tools
Service statistics | Facility logs, referral cards
Reach and success of the program intervention within audience subgroups or communities | Small surveys with primary audience(s), such as provider interviews or client exit interviews
The reach of media interventions involved in the program | Media ratings data, broadcaster logs, Google Analytics, omnibus surveys
Reach and success of the program intervention at the population level | Nationally representative surveys, omnibus surveys, community data
Qualitative data about the outcomes of the intervention | Focus groups, in-depth interviews, listener/viewer group discussions, individual media diaries, case studies

Once it is determined how data will be collected, it is also necessary to decide how often it will be collected. This will be affected by donor requirements, available resources, and the timeline of the intervention. Some data will be gathered continuously by the program (such as the number of trainings held), but may be recorded only every six months or once a year, depending on the M&E plan.

After all of these questions have been answered, a table like the one below can be included in the M&E plan. It can be printed out so that all staff working on the program can refer to it and everyone knows what data is needed and when.

Indicator | Data source(s) | Timing
Number of trainings held with community girls | Training attendance sheets | Every 6 months
Number of outreach activities conducted at girl-friendly locations | Activity sheets | Every 6 months
Number of courses taught at girl-friendly locations | Subject sheet / teacher's manual | Every 6 months
Percent of girls receiving program messages through the media or WOM | Population-based surveys | Annually
Percent of prospective girls willing to take the skills training | Population-based surveys | Annually
Number and percent of peer leaders providing trainings to girls | Facility logs | Every 6 months
Number and percent of new enrollments | Population-based surveys | Annually
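Optionally, the same table can be kept in a simple machine-readable form so entries can be filtered when preparing a review. A minimal sketch in Python; the structure and field names are assumptions for illustration, not a requirement of an M&E plan:

```python
# Minimal sketch: the indicator table as a list of records, so entries
# due at a given interval can be pulled out programmatically.
indicators = [
    {"indicator": "Number of trainings held with community girls",
     "source": "Training attendance sheets", "timing": "Every 6 months"},
    {"indicator": "Percent of girls receiving program messages",
     "source": "Population-based surveys", "timing": "Annually"},
]

# Example: list everything due in the six-monthly review
for row in indicators:
    if row["timing"] == "Every 6 months":
        print(f'{row["indicator"]} ({row["source"]})')
```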

Step 4: Identify M&E Roles and Responsibilities

The next element of the M&E plan is a section on roles and responsibilities. It is important to decide from the early planning stages who is responsible for collecting the data for each indicator. This will probably be a mix of M&E staff, research staff, and program staff. Everyone will need to work together to get data collected accurately and in a timely fashion.

Data management roles should be decided with input from all team members so everyone is on the same page and knows which indicators they are assigned. This way when it is time for reporting there are no surprises.

An easy way to put this into the Monitoring and Evaluation plan is to expand the indicators table with an additional column showing who is responsible for each indicator, as shown below.

Indicator | Data source(s) | Timing | Data manager
Number of trainings held with community girls | Training attendance sheets | Every 6 months | Activity manager
Number of outreach activities conducted at girl-friendly locations | Activity sheets | Every 6 months | Activity manager
Number of courses taught at girl-friendly locations | Course sheet | Every 6 months | Activity manager
Percent of girls receiving program messages through the media or WOM | Population-based surveys | Annually | Research assistant
Percent of prospective girls willing to take the skills training | Population-based surveys | Annually | Research assistant
Number and percent of peer leaders providing trainings to girls | Facility logs | Every 6 months | Field M&E officer
Number and percent of new enrollments | Population-based surveys | Annually | Research assistant

Step 5: Create an Analysis Plan and Reporting Templates

Once all of the data has been collected, someone will need to compile and analyze it to fill in a results table for internal review and external reporting. This is likely to be an in-house M&E manager or research assistant for the program.

The Monitoring and Evaluation plan should include a section with details about what data will be analyzed and how the results will be presented. Do research staff need to perform any statistical tests to get the needed answers? If so, what tests are they and what data will be used in them? What software program will be used to analyze data and make reporting tables? Excel? SPSS? These are important considerations.
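For example, if the analysis plan calls for testing whether the pre-post change in scores is statistically significant, a paired t-test is one common option. A minimal sketch in Python using SciPy, with hypothetical scores; the actual test should match the plan's design:

```python
# Minimal sketch: paired t-test on pre/post scores for the same group.
# Scores are hypothetical; a real analysis would follow the M&E plan.
from scipy import stats

pre_scores = [42, 55, 38, 61, 47]
post_scores = [58, 70, 52, 75, 63]

# ttest_rel compares paired samples (the same girls measured twice)
result = stats.ttest_rel(post_scores, pre_scores)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```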

Another good thing to include in the plan is a blank table for indicator reporting. These tables should outline the indicators, data, and time period of reporting. They can also include things like the indicator target, and how far the program has progressed towards that target.
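For instance, progress toward an indicator target reduces to a simple percentage; a minimal sketch with hypothetical numbers:

```python
# Minimal sketch: progress toward an indicator target (hypothetical values).
target = 40      # trainings planned over the reporting period
achieved = 26    # trainings held so far

progress_pct = achieved / target * 100
print(f"Trainings held: {achieved}/{target} ({progress_pct:.0f}% of target)")
# Trainings held: 26/40 (65% of target)
```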

Step 6: Plan for Dissemination and Donor Reporting

The last element of the M&E plan describes how and to whom data will be disseminated. Data for data's sake should not be the ultimate goal of M&E efforts. Data should always be collected for particular purposes.

Consider the following:

  • How will M&E data be used to inform staff and stakeholders about the success and progress of the program?
  • How will it be used to help staff make modifications and course corrections, as necessary?
  • How will the data be used to move the field forward and make program practices more effective?

The Monitoring and Evaluation plan should include plans for internal dissemination among the program team, as well as wider dissemination among stakeholders and donors. For example, a program team may want to review data on a monthly basis to make programmatic decisions and develop future work plans, while meetings with the donor to review data and program progress might occur quarterly or annually. Dissemination of printed or digital materials might occur at more frequent intervals. These options should be discussed with stakeholders and the program team to determine reasonable expectations for data review and to develop plans for dissemination early in the program. If these plans are in place from the beginning and become routine for the project, meetings and other kinds of periodic review have a much better chance of being productive ones that everyone looks forward to.
