April 6, 2023

ODE will not be returning. What’s the best alternative for a modern development program?

Announced in 2006, the Office of Development Effectiveness (ODE) set out to lead evaluations of Australian aid projects and to chart a pathway for improvement.

The ODE was abolished fairly quietly in 2020, and the Minister for International Development and the Pacific, Pat Conroy, indicated late last year that it won’t be resurrected, saying ‘I don’t think we will be returning to the past’. Instead, he said that he’s ‘looking at a new and improved approach’.

So what could – and should – that approach look like? To find out, we asked the experts what the best alternative to ODE would be.

Jo Hall
Independent Evaluation and Performance Consultant

The new aid policy aims to elevate aid within foreign affairs, but there is a risk that the effectiveness of Australia’s aid program, which has steadily declined over the past few years, will not keep up. DFAT is quite good at counting things in terms of program delivery, but less good at measuring whether the longer-term changes it was hoping for are being achieved. DFAT also tends to overestimate how well its interventions are going, so more independent oversight is needed. If evaluation and monitoring are to help improve effectiveness, they need specialist skills to inform earlier action and to support genuine learning across DFAT and partner countries.

So what’s an alternative? In my view, DFAT should engage an expert Chief Evaluator to elevate effectiveness in the implementation of the new aid policy. More independence and longevity would be assured if this were a statutory office. The Chief Evaluator would advise the Secretary and Minister on issues of effectiveness in the aid program and lead a small team of specialised staff to provide expert intervention at an earlier stage in programs. This could include:

  • assessing the likely effectiveness of new program designs
  • advising on the suitability of monitoring systems, especially their:
    - ability to track expected changes and outcomes
    - usefulness in informing management decisions
    - usability by local partners
  • undertaking regular country program evaluations and synthesising multiple evaluations to understand why different types of interventions work or don’t work in different contexts
  • sharing approaches that work across similar contexts

Elevating the evaluation position also gives effectiveness a better chance of gaining traction across the foreign affairs portfolio, and potentially across the development efforts of other government departments, in the implementation of the new aid policy.

For seven years, Jo was the Director in the former ODE, where she established the program quality systems still in place in DFAT. This followed 30 years working in international development, and she is now finishing her PhD at the ANU on ways of measuring aid effectiveness. At the Lab, we love Jo’s deep commitment to understanding how things work and to asking important questions. In fact, Jo was the one who asked the Minister about ODE last year, and sparked this Intel question.

Natalia Beghin
Senior Consultant, Alinea International

While we don’t know what form an ODE successor might take, there are three principles any alternative should strive for:

Give it teeth. Independence and autonomy are the bedrock of credibility, and enhance the likelihood that evaluations are conducted comprehensively and objectively. When real or perceived conflicts of interest are at play, the toughest questions don’t get asked, and programs are deprived of the opportunity to reach their full potential. All evaluation teams should be staffed (at least in the majority) by externals, and be afforded the power to publish and follow up on recommendations without the prior clearance of DFAT staff.

Don’t let the perfect get in the way of the good. Randomised controlled trials are every evaluator’s dream, but they’re not always the right tool for the job. Too often, robustness is prized at the expense of relevance. This results in large spends on polished datasets that can’t offer meaningful insight into what’s actually happening, or how things can be improved. Evaluators of Australia’s aid program need the expertise to choose the right evaluation methods with confidence, and enough autonomy to go against the grain when it matters most.

Prioritise learning. To ensure evaluation findings can be harnessed for improvement, learning needs to form a central part of any assessment. This means ensuring that evaluation processes are participatory, and that they also facilitate opportunities for safe and meaningful reflection and discussion. In practice, it also requires promoting a “fail fast” approach that reframes setbacks as opportunities to enhance program performance, adaptability, and innovation. 

Natalia is a powerhouse international development expert with particular expertise in gender equality and social protection. She’s worked in Government on engagements with the UN Human Rights Council, the G20, G7, APEC, and ASEAN, as well as for the UN and World Bank, and is currently a senior consultant with Alinea International. A dear friend of the Lab, we love Natalia’s deep curiosity and positivity, her dedication to solving wicked problems, and her endless generosity with her time and knowledge. Oh – and did we mention she also speaks Mandarin and French?

Ryan Edwards
Deputy Director, Development Policy Centre

Why shouldn’t ODE be returned? Fortunately, the Minister seems to want something better than ODE, treating it as a starting point rather than something to be disregarded. Most experts and the current Minister lamented its demise, which has also coincided with a significant decline in technical and economic capacity in and around the aid program. DFAT’s own reporting (p.66) shows that 38 per cent of its completed projects in 2021-22 were rated as unsatisfactory against effectiveness and efficiency criteria. Understanding effectiveness requires impact evaluation. Understanding cost-effectiveness and efficiency requires serious economics and measurement chops. While costs are usually straightforward to calculate, the benefits side of the ledger requires credible estimates of development impacts attributable to aid.

What should a new, better-than-ODE approach involve? What ODE lacked, or did not focus on enough, is an obvious starting point. First, genuine independence in evaluations and protection from political interference. I am agnostic as to whether this is an in-house government function, which may be more feasible, or a separate, transparent, and public-facing office. What matters most is longevity and informed decision-making. Second, careful measurement – not in M&E matrices, but of development indicators to assess progress and guide programming, based on need (e.g., poverty). Third, a focus on asking the right questions with rigorous impact evaluation – not strictly experimental, but with a commitment to credible counterfactual-based analysis able to inform a serious value-for-money discussion.  

These three basic ingredients may require a major injection of technical expertise but are essential to meet the current government’s aspirations, help us know what works, and do more of that and less of other things.

Formerly at Dartmouth and Stanford, Ryan is now the Deputy Director at the Development Policy Centre here in Canberra. He’s also a Fellow in the Arndt-Corden Department of Economics, among many other impressive research endeavours. The Lab loves Ryan’s deep knowledge across the intersection of development and labour economics, international trade, and the environment, and of course his crafty Twitter feed – a must-follow if you’re not already.
