Attribution vs. Contribution: The Elephant in the Evaluation Room
If you’ve ever been asked: “So how do we know it was YOUR project that caused this change?”, you’ve probably felt that mix of panic and frustration many Monitoring & Evaluation (M&E) professionals know too well.
Attribution (proving your project directly caused an outcome) sounds neat and scientific. But in the real world, life is messy. Change rarely happens because of a single project. It’s influenced by politics, culture, economics, even the weather.
That’s why evaluators increasingly talk about contribution, not attribution. Instead of claiming, “We caused this change,” we ask, “How did we contribute to this change, alongside other factors?”
So how do you actually do this in practice? Here are five practical tips that make a difference.
1. Map the Web of Influence
Picture your project as one thread in a much bigger web. Develop a Theory of Change or use contribution mapping to visualise the other actors and forces at play, such as government policies, other NGOs, and community norms.
👉 Practical tip: In workshops, ask stakeholders: “Who else is influencing this outcome?” Write it all down. This helps you frame your results as part of a bigger story.
2. Gather Different Kinds of Evidence
Numbers alone won’t prove contribution. Pair quantitative data with qualitative stories, stakeholder interviews, and case studies that reveal how your project influenced decisions or behaviour.
👉 Practical tip: Try contribution tracing or process tracing. These methods use “evidence tests” to rule out alternative explanations. For example, did a new government policy play a bigger role than your training programme?
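To make the idea of an "evidence test" concrete, here is a minimal sketch of how contribution tracing updates confidence in a claim using Bayes' rule. All the probabilities and the scenario below are hypothetical, chosen only for illustration; in practice you would elicit them from your evaluation design, not invent them.

```python
# Illustrative sketch of a contribution-tracing "evidence test" using
# Bayes' rule. All probabilities here are hypothetical.

def update_confidence(prior, sensitivity, false_positive):
    """Update confidence in a contribution claim after observing evidence.

    prior          -- confidence in the claim before seeing the evidence
    sensitivity    -- P(evidence observed | claim is true)
    false_positive -- P(evidence observed | claim is false)
    """
    numerator = sensitivity * prior
    denominator = numerator + false_positive * (1 - prior)
    return numerator / denominator

# Start 50/50 on the claim "our training programme changed practice".
confidence = 0.5

# A "smoking gun" item: trainees cite the programme unprompted in
# interviews -- likely if the claim is true, unlikely otherwise.
confidence = update_confidence(confidence, sensitivity=0.6, false_positive=0.05)

print(round(confidence, 2))  # confidence rises well above the prior
```

The point of the sketch is the logic, not the numbers: evidence that would be very unlikely under the alternative explanation (for example, the government policy alone) moves your confidence much more than evidence that fits every story equally well.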
3. Use Triangulation (Don’t Rely on One Data Source)
If you only have survey data, skeptics will poke holes. Strengthen your case by triangulating — combining surveys, focus groups, administrative data, and even external reports.
👉 Practical tip: If three independent sources point to the same conclusion, you can make a much stronger case that your project contributed.
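The triangulation check above can be sketched in a few lines. The source names and findings below are invented for illustration; the point is simply to record each source's finding and count how many independently support the same conclusion.

```python
# Minimal sketch of a triangulation check: do independent sources agree?
# Source names and findings are hypothetical examples.

findings = {
    "household survey": "uptake increased",
    "focus groups": "uptake increased",
    "clinic admin data": "uptake increased",
    "external NGO report": "no clear change",
}

def sources_supporting(findings, conclusion):
    """Return the sources whose finding matches the given conclusion."""
    return [source for source, result in findings.items() if result == conclusion]

support = sources_supporting(findings, "uptake increased")
print(len(support))  # number of independent sources that agree
```

Even this toy version makes the dissenting source visible, which matters: triangulation is as much about surfacing disagreement between sources as it is about counting agreement.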
4. Be Honest About Limits
Decision-makers actually respect transparency. Instead of overselling impact, explain what you can reasonably claim and what you cannot.
👉 Practical tip: A phrase like "Our project played a catalytic role alongside government investment" builds credibility and keeps you out of the attribution trap.
5. Shift the Conversation
Sometimes the problem is less about methods and more about mindset. Colleagues or funders may still push for attribution because it feels more definitive. That’s when you reframe:
Instead of “Did we cause this change?” ask “How did we contribute to this change?”
👉 Practical tip: Use analogies. For example: “Ending child marriage is like moving a mountain. Our programme wasn’t the bulldozer, but we were one of the shovels that helped make the path.”
In Conclusion
Attribution vs. contribution isn’t just semantics. It’s about recognising that social change is complex and collective. When we shift from proving causality to demonstrating contribution, we tell a more honest, nuanced, and powerful story of impact.
Ready to move beyond panic when funders ask about attribution?
In my Clarity to Impact™ Programme, we go deep into exactly this: how to design MEL systems that show credible contribution, use the right mix of evidence, and tell a powerful story of impact that funders respect and communities recognise.
👉 Join the October cohort of the Clarity to Impact™ Programme. We begin in a few days, and spots are almost full!


