
Unfolding the Simplicity in Development Results

By Beatrice Mkani (MEL Practitioner at Sikika)

What impact are you making? How well prepared are you to measure your impact? How do you measure impact? When do you expect to start seeing impact coming through? How do you know this impact is a result of your intervention? How do you draw a line between what you have done, what other players have done, and what the environment has contributed to the impact achieved? These questions, and several others, are common in MEL and development circles. In my view, they do not have direct or simple answers.

Moreover, the situation seems to be getting harder, especially now that the world has become more like a village, with so many actors and theories vying for space in the development field. Further complicating matters is establishing a common understanding among funders, implementers, and key stakeholders on questions of results measurement. Some succeed in defining the results of their own program but fail to carry that definition through the program/project life and beyond.

Development results, best expressed as "impact and outcomes,"[1] play a big role in defining the success of a program/project; nevertheless, the challenge has been keeping them in sight during and after the project life. Development practitioners have come up with countless initiatives to resolve the prevailing challenges, yet the gap in measuring results in terms of impact and outcomes remains a practical concern among practitioners in both the developed and developing world.

This is experienced particularly on programs and projects that intend to influence policy-making, systemic change, advocacy, behaviors, and practices, as they operate in increasingly complex environments that challenge the measurement of outcomes and impacts. There are different scenarios: some organizations, for instance, are well geared towards short-term outcomes but experience difficulties with a long-term perspective, especially after the phase-out of the program. Some experience difficulties with both short- and long-term perspectives. However, it is important to acknowledge that a few organizations have been successful in measuring results in both the short and long term.

I have been practicing MEL for eight years now; I am still a baby in the field and need to learn more. However, I am grateful to other practitioners for the vast amount of knowledge available in MEL: different theories that complement and challenge each other, write-ups, research, and learning platforms, all of which provide opportunities to learn, not only in MEL but in the wider development world. Throughout my years of practice, one of my key learning questions has been: what could possibly be simplified in dealing with the complexity of monitoring, evaluation, and learning in programs and projects, particularly when it comes to realizing results as outcomes and impacts at the beneficiary level?

Oftentimes I am convinced, or rather tempted, to think the solution might lie in finding the simplicity within the details of complex programs/projects or environments. Simplicity does not necessarily mean simple to implement; rather, it may provide strategic direction for resolving and dealing with complex development issues when it comes to thinking about, measuring, and reporting outcomes and impacts. Throughout my learning, my thoughts on this so-called "simplicity" have centered on the following:

Realism in the design of the MEL system

I think one of the aspects that is often overlooked is the design stage. For many organizations, this stage is normally strong in theory but far from the reality and complexity of the environment. The details of assumptions, indicators, stakeholders' roles, and the influence of the environment are often not well considered during the design phase. Project designers too often leave out the simple issues that provide a way through the complexity. For instance, if the assumptions consider only the policy level and the local-level context, but forget the influence of beneficiaries, or even of the beneficiaries' influencers, this will affect the implementation of the designed system. There are normally hidden factors that call for deep analysis to reveal the issues that should be considered. Issues that unfold only after the design will affect the operationalization of the system, may jeopardize program management and the MEL itself, and at the end of the day may prevent the system from adequately revealing the reality of the impact and outcomes that a program or project is achieving.

The choice of M&E methodology is another critical issue at the design stage. Many organizations face a dilemma over whether to go for outcome mapping, a theory of change, a logic model, hybrid models, or the many other options available. I have witnessed such questions coming up in many M&E workshops, and have also observed the struggle in my own organization and country, Tanzania. Even though the purpose of each methodology is well stipulated, implementation is often challenging as a result of the organizational set-up and the local context in which the organization operates.

For instance, in the developing world, donors or peers in the industry often largely influence the decision of which MEL methodology to adopt. That influence can result in a good system, but the implementers may not be ready to implement it for several reasons, such as gaps in knowledge, skills, or resources, or a local context that inhibits implementation. In some instances, the system remains on the shelf because its implementation is simply not feasible.

The design stage also influences later stages, such as the institutionalization of the system in the organization, implementation strategies, and even the performance of the program/project, because a poorly designed system will not provide adequate information to help managers and other staff make better decisions for the program/project.

So I think it is important that greater thought be invested in the design stage, thought that will enhance organizations' ability to measure their results, but also to learn, adapt, and change in a highly complex environment. Clear and careful thinking will provide a way to ensure effective monitoring, evaluation, and learning in the organization, and make it possible to deal with complex issues.

Some of the questions designers should ask themselves are: What is the content of the system, how is it programmed, and how will it be operated? What resources do we have? What capacity challenges do we have? What is the system for? Who will be responsible? What resources should be invested, and how well can the system be aligned with the program/project? How should implementation plans be designed and operationalized? How can the role of the external environment, including stakeholders at different levels, be taken into account? How can adaptability be built in should the need arise? Many other questions matter at the design stage, and they will depend on the organization and its impact and outcomes context.

Adaptability

Adaptability planning is vital to ensuring that the system thrives, and not just survives. Proper adaptability plans help the organization remain strategic in different kinds of environments, such as the introduction of a new law, regulation, or policy, a change of donors, operating with minimal resources, and even a change in stakeholders' attitudes or behavior. All of these can be accommodated if the organization is well prepared to adapt to unexpected or unanticipated developments during the lifetime of the program/project.

Adaptability goes beyond risk management plans; it should handle challenges from a wider perspective and, as such, includes more than what is reflected in the risk management plan. Adaptability has to reflect the simplicity in the details of the MEL framework and the program/project. If that simplicity is ignored, it may signal that complex issues cannot be handled, particularly under constrained resources such as time, or during a change in the policy or political environment. For instance, if the MEL system fails to accommodate new donor requirements, or a change in the policy or political context, it will likely fall short, which may have a significant effect on achieving and measuring impact and outcomes.

If adaptability fails, then the strategic management and positioning of the organization and its MEL system may fail too. Hence, it is important to have a functioning adaptability plan, which will complement the measurement of the organization's impact and outcomes in our complex development world.

Timing of the change and levels of results

Time is key in any MEL system; it also contributes to the system's effectiveness in measuring impact and outcomes. As we think about how to measure impact and outcomes more realistically, timing should also be considered. It is not only about grouping levels of results by time frame, such as outputs within a year, outcomes in three to five years, and impact in ten years. Rather, what is essential is a thoughtful plan for how to measure results against the identified time frames, one that also allows for adaptability should the need arise. Organizations need to carefully consider how timing, alongside other resources, is planned for the strategic management of short-, medium-, and long-term outcomes, and ultimately impact.

Action-oriented learning

Learning enhances the knowledge, skills, and attitudes of individuals; that is why many organizations invest in it. But the question remains: how can learning improve our culture, operations, and strategic thinking so as to improve our thinking about impact and the measurement thereof?

In the development world, learning has become a culture; however, the emphasis on its usefulness is rather slight, so to speak. Recently I have seen more learning opportunities for MEL practitioners, which is very good for us as it enables us to enhance our knowledge and skills, yet I wonder whether we could be taking more information, knowledge, and skills out of these opportunities. The recent MEL workshop[2] organized by the Open Society Foundations Fiscal Governance Program re-emphasized action-oriented learning, which to me is the foundation of all the other issues. We should not be afraid to fail, but rather learn from our failures, and in doing so we become increasingly good at what we do. The more our systems fail, and the more we carry those lessons forward, the more we gradually increase our systems' effectiveness at measuring impact and outcomes.

Learning from others and adapting what we learn also improves our understanding of complex environments and situations, and moves us a step closer to achieving our results.

Conclusion

Thinking about impact from the initial design of a program/project, trying to find the simplicity within the complexity, and carrying a culture of learning throughout the lifetime of a program/project can be resource-intensive. However, it can also save the massive resources that would otherwise be invested later in the search for impact and outcomes. What counts as simplicity and complexity may vary greatly depending on one's organization or context, but this way of thinking can add value to the measurement questions and improve the achievement of results.


Beatrice Mkani is a MEL practitioner at Sikika.

 

[1] UNDG (2010), RBM Handbook, available at https://undg.org/

[2] MEL Jamboree, Fiscal Governance Program, Open Society Foundations, 2017 report
