Completing an evaluation in unstable and shifting conditions: A success story
Representatives from The Mitchell Group (TMG) and EdIntersect released a reflection on their approach to the mid-term performance evaluation of the Selective Integrated Reading Activity (SIRA) in Mali.
The evaluation team relied on key strategies such as partnerships, flexible and responsive evaluation design, and a resolute commitment to ethics to overcome several significant real-world challenges in the Malian context. Their approach to the Mali SIRA evaluation offers strategies for funders and evaluators in international development who are pursuing work in similar contexts.
The evaluation was conducted between January and May 2020, using quantitative and qualitative methods. The evaluators faced three significant challenges: teacher strikes, violence from local armed groups, and the COVID-19 pandemic. Prolonged teacher strikes left few schools operational at the time of the evaluation, and teachers and school directors were hesitant to participate for fear of retaliation from teachers’ unions. Violence and political unrest driven by local armed groups affected some SIRA schools and led to school closures. The third major challenge, the COVID-19 pandemic, forced international team members to leave Mali prematurely when borders closed.
The authors suggest three main strategies for success in similar contexts: partnerships, flexible and responsive evaluation design, and a commitment to ethics and the “do no harm” principle. In keeping with USAID’s New Partnership Initiative, intercultural collaboration was inherent to the team structure, as was cross-institutional partnership between two U.S.-based institutions: TMG, a large business, and EdIntersect, a woman-owned small business. Local systems, personnel, and partners also played a crucial role in the evaluation’s success. Accordingly, the SIRA project team, a collaboration between the Education Development Center (EDC) and implementing partners, emphasized partnerships at many levels, from key education partners at the national ministerial level to the regional and school/community levels.
The evaluation team’s approach also relied on flexibility and adaptability through creative uses of technology, initiatives to maintain flexibility, and a participatory relationship with stakeholders. Specific strategies included using appropriate virtual platforms to distribute evaluation findings and recommendations, maintaining a comprehensive back-up school list, and allowing multiple opportunities for stakeholders to review findings before the final report. Throughout the process, the team prioritized the rights and well-being of all those involved, including stakeholders and team members, and relied on team members’ judgment to assure the safety of all field agents.
For future evaluations in similar crisis/conflict settings, the authors put forth several considerations especially relevant during the COVID-19 international crisis. Future evaluation teams should:
- pay continued attention to contextual realities and build in conflict-sensitive mechanisms that adapt the evaluation design in real time,
- integrate learning and capacity-building at all levels of the evaluation process, operationalizing them through evaluation design and team configuration as outlined in USAID’s Journey to Self-Reliance,
- use technology to provide remote support from international team members to in-country, on-the-ground colleagues, and
- use technology for small-group processing, comprehensive discussions, and remote workshops.
This success story not only offers tools and perspectives for future evaluation teams, but also shows that fragile crisis environments should not be written off as unsuitable for productive monitoring, evaluation, and learning work.