The early adopters of AI are not necessarily the most vulnerable, nor the most strategic; they’re simply the least constrained. But as they move quickly to shape how the world uses AI, I worry that development practitioners, donors and supporters are being left behind.
When I talk to international development colleagues around the MS Teams water cooler, I meet four broad mindsets. Each is valid. Each drives how individuals and organisations act in response to AI: as a threat, a tool or a distraction. And ultimately, each mindset captures something true about who we are as a development community.
Here are the four mindsets shaping the moment:
The optimists
The optimists emphasise the transformative potential of AI to improve development cooperation, from scaling service delivery to widening access to knowledge. They push for experimentation and pilot projects, chasing efficiency and effectiveness gains. This is the group of colleagues who are most likely to achieve an AI breakthrough for international development. However, optimists often risk an over-reliance on technological fixes without fully grappling with the political economies they operate in. The classic development blind spot: engaging with tools more easily than with power.
The cautious adopters
The cautious adopters are open to both the benefits and the risks of AI. They’re curious about its potential, often using AI for low-risk tasks like drafting, summarising, or cleaning data. Adoption grows incrementally as clearer guidance emerges. This instinct for gradualism helps them feel safe, but also keeps them slow. Many confine AI use to personal contexts because of compliance rules or internal bans. In a process-heavy culture, these colleagues are the realists. And they know the system moves only as quickly as donor coordination meetings produce results.
The risk managers
The risk managers see AI as potentially useful but requiring strong guardrails before wide adoption. Their focus is on governance frameworks, cyber protection, and oversight of partner activity. They embody one of international development’s noblest instincts: protecting trust. But their collective fear of reputational damage can make risk management an end in itself. With short project cycles and tight accountability, they sometimes forget that avoiding failure isn’t the same as achieving impact. Still, these colleagues are the reason organisations avoid repeating old mistakes on a larger, faster scale.
The sceptics
The AI sceptics and abstainers either value inclusion and community-based approaches and question whether AI aligns with those principles at all, or are simply wary of technology writ large. They highlight bias, environmental cost, and the risk of deepening inequality or external control. They’re the conscience of our field. Without them, we’d rush headlong into every shiny tool. Yet scepticism can harden into distance. If the most thoughtful voices disengage, the space fills with those least inclined to ask ethical questions.
These mindsets don’t reflect technical literacy (something I wouldn’t claim to have much of myself) but a particular worldview and institutional culture.
Each represents a different instinct about change: whether to accelerate it, resist it, manage it, or channel it.
Together they form the ecosystem of development thinking, reflecting our community’s strengths in ethics, collaboration, learning and a desire to improve the world – as well as our pitfalls in short-termism, process overload, missing the politics, slow adaptation and risk aversion.