Why donors don’t “buy and scale” existing projects the way investors do, and how final evaluations could change that

By Rhode Charles

In the private sector, growth rarely starts from scratch. Investors identify what works, understand why it works, and direct capital toward scaling proven models. They expand what performs because it reduces uncertainty.

International development operates differently. Donors frequently fund new projects, even where similar interventions already exist and have shown results. Instead of consolidating what works, the system duplicates efforts, fragments resources, and restarts learning cycles.

Donors do sometimes expand existing initiatives, but this is not the default. Scaling is not systematically embedded in how the system operates.

This is not just inefficient. It limits impact. When successful models are not expanded, their effects remain localized. When learning is not cumulative, progress slows. Resources spread across multiple small initiatives reduce the potential for system-level change.

At the core lies a simple question: if a project has demonstrated credible results, why is scaling not the default next step?

Scaling should reduce risk, accelerate outcomes, and improve cost-effectiveness. It should shift the system from experimentation to consolidation. The issue is not that donors never scale. It is that no system ensures they do.

This gap is rooted in structural incentives. Procurement frameworks favor new competitive processes over continuity. Funding cycles are often short and end just as evidence becomes robust. The system is fragmented, with multiple actors operating in parallel and few mechanisms to consolidate efforts. Attribution also matters. New projects allow clear ownership, while scaling often involves shared credit.

These dynamics push the system toward starting over rather than building forward.

Final evaluations should change this.

At project completion, evaluations synthesize evidence on what worked, how, and at what cost. They provide exactly the type of information that should inform further investment. Yet in practice, they are treated as compliance tools, completed and archived with limited influence on funding decisions.

This creates a disconnect. The system invests in generating evidence but does not systematically use it.

Final evaluations should function as decision tools. They should provide investment-grade evidence to inform whether to scale, replicate, or stop. Instead of marking the end of a project, they should serve as the entry point for expansion.

Repositioning evaluations this way would shift incentives. If future funding depended on evaluation findings, their rigor and credibility would become essential. Evaluation would move from reporting to shaping resource allocation.

The cost of not making this shift is significant.

First, there is a loss of return on investment. Donors fund design, implementation, and evaluation, but when successful projects are not scaled, that investment does not translate into broader impact. Capacity built during implementation often dissipates, requiring systems to be rebuilt later. Duplication persists across contexts.

Ultimately, the greatest cost is impact. Effective interventions remain limited in scope, producing isolated successes rather than widespread change.

Adopting an investor mindset would introduce a different logic. Evidence would guide funding decisions. Projects would be treated as stages, from pilot to scale. Strong performance would trigger follow-on financing, and scaling pathways would be explicit.

This would also reshape how success is defined. Beyond immediate results, projects would be assessed on scalability, cost-effectiveness, and replication potential.

The development sector is strong at generating innovation but weak at translating it into scale. Final evaluations sit at precisely the point where uncertainty has been reduced and decisions could be made, yet without a structured link to funding, their influence remains limited.

The system already knows a great deal about what works. The challenge is not knowledge. It is choice.

Achieving large-scale change requires linking evidence to investment, aligning incentives toward continuity, and supporting longer-term expansion. Evaluation must be used, not just produced.

Scaling should not be the exception. It should be the system.