The European Union has just published a new report on development cooperation. Does the EU deliver? provides a detailed analysis of the EU institutions’ and Member States’ performance in implementing the effectiveness principles agreed by the international community over the past ten years and more. For actors closely following development cooperation patterns, the document offers fresh insights into EU donors’ performance as well as proposals for adjusting the monitoring framework. It is a valuable reference for continuing conversations on achieving the Sustainable Development Goals as we draw closer to 2030.
The report goes beyond the findings of the Global Partnership for Effective Development Cooperation’s (GPEDC) Global Progress Report by, for instance, providing an EU collective indicator and drawing on qualitative interviews. In this way, it offers a better picture of how relationships between the EU and its development partners work in practice. Perhaps more importantly, it raises issues that should interest all development practitioners.
Let’s go back to July 2019 and the GPEDC Senior-Level Meeting. Commenting on the latest Global Progress Report, the Co-Chairs’ Statement pointed to “a mixed picture” on the implementation of the effectiveness principles. More specifically, it advocated “further action” to improve the alignment of development cooperation with partner priorities and country-owned results frameworks, and to promote transparency¹. Against this background, the EU takes a deep dive to present a comprehensive picture that helps stakeholders understand where more effort is needed.
Because the review concerns donors’ performance, some areas of the GPEDC monitoring framework have come under greater scrutiny than others, namely: alignment and Country Results Frameworks (CRFs), forward visibility and predictability, use of country systems, and tied aid. The substance of the report revolves around three chapters: country leadership (chapter 3), closely linked to SDG indicator 17.15.1 and country policy space; transparency (chapter 4); and the drivers behind the EU’s performance (chapter 5), the most political section. The report also features the questionnaire submitted to EU Member States and 17 country profiles.
Data-wise, one distinctive feature is the approach used to assess the EU’s combined performance (EU institutions and Member States together): 1,756 projects have been pooled² into a single database that generates the EU collective indicator. In this way, regional trends are not derived merely from a combination of national averages. The granularity of the data is improved by reviewing the findings through a categorisation of donors (DAC, vertical funds, multilaterals, etc.), of development partners (fragile states, Africa, LDCs, etc.), and of instruments, channels, and sectors.
It is not possible to summarise the richness of the report here. For now, we need to take stock of the fact that there are signs of regression overall. The EU’s performance is stagnating or has worsened in several areas: short- and medium-term predictability, use of partner country systems, transparency, use of nationally owned indicators, and evaluations shared with partner countries. The essence of the problem the report explores is why such negative shifts have occurred. Which drivers lie behind these trends?
According to the qualitative analysis, there is still general support for the effectiveness agenda within the EU Member States. From this angle, the volatility in performance from 2016 to 2018 cannot be explained by donors changing their policies as dramatically as would be required to bring about such ample shifts. The report does take into account several drivers that may be at play. In fact, the findings explore the implications of trends in prevailing political priorities that may be hampering the realisation of the effectiveness principles: the report notes that the focus on migration, climate change, and trade interests may conflict with the effectiveness agenda.
However, another line of thinking in the EU report calls into question the quality of the data and of the very reporting process under the GPEDC. As dry as this argument may sound, it may embody some of the issues essential to development partnerships and to the monitoring process as it stands. It is worth recalling that the GPEDC was established under a very powerful credo that called for the new partnership – replacing a system based in the OECD DAC – to be global light and local heavy; one major implication was a monitoring process largely based, in principle, on data coming from partner countries. The EU report thus voices trust issues concerning both the quality of partner countries’ data systems and the limitations of GPEDC reporting, which does not allow for proper data vetting and validation. Such an approach is already gaining momentum, considering that the GPEDC is in the process of reviewing its own evidence offer.
Fractures may run deeper still, as the report’s conclusions plant seeds of doubt about development cooperation effectiveness and effective programming: in the eyes of many leading officials, the two may not fully overlap. Many factors may be at work simultaneously here, starting with diminished familiarity with the effectiveness agenda, and with the GPEDC’s workings in particular, greater political pressure shaping development priorities, and donors’ diminished tolerance of partner countries’ capacity and management issues. The fate of budget support may offer a telling story in this regard. Practical ways forward may include maintaining the Busan monitoring framework as a standard while also making some major changes, specifically in the most problematic areas such as use of country systems, which, according to the EU report, may require major fixes.
The EU has taken bold steps to examine its performance in implementing the effectiveness agenda by publishing Does the EU deliver?. While further opportunities for deeper conversations on the findings are needed, the importance of fact-driven, evidence-based policies is one overarching message for everyone to ponder; efforts to improve global, comprehensive, and regular reporting on effectiveness indicators and principles cannot be emphasised enough. All development stakeholders, but especially civil society, can take advantage of the EU report and use its lessons to inform how we engage with the European Union in particular and pursue the effectiveness agenda in general.