AI is moving fast, but grantmakers are rightly cautious. Funders are under pressure to move money more efficiently, learn faster, and support grantees better, all without adding risk, burden, or opacity to an already complex system. The question is no longer whether AI will touch grantmaking, but where it can actually add value—and where it shouldn’t.

On April 16, NationSwell invited philanthropic and impact leaders to take part in a conversation on the practical use of AI in grantmaking. The discussion explored when AI can meaningfully improve decisions and workflows, and how to adopt it in ways that strengthen, rather than undermine, equity, accountability, and relationships with grantees. Some of the most salient takeaways appear below:
Key takeaways:

Assess where AI meaningfully adds value across the grantmaking process. Rather than applying AI indiscriminately, organizations should step back and evaluate workflows end-to-end to determine where these tools can be most effective. A thoughtful, system-level approach helps ensure AI is applied in ways that enhance, rather than complicate, existing processes.

Use AI to streamline manual and error-prone grantmaking workflows. Financial due diligence can be a highly manual, time-intensive, and error-prone process, often involving spreadsheet-based analysis or visual review of financial statements. AI tools like Grant Guardian were developed to improve accuracy and efficiency in this specific workflow. 

Reinvest time savings from AI into deeper grantee engagement. Small grantmaking teams often face hundreds of applications, creating real capacity constraints. AI can support summarization, rubric-based pre-review, and prioritization to help manage this volume. Cutting processing time from hours to minutes frees staff to spend more time in meaningful conversations with grantees and on improving the quality of their work.

Recognize and normalize AI use among applicants and grantees. There is growing recognition that applicants and grantees are using AI to improve efficiency, particularly in drafting and responding to applications. When used thoughtfully, this can help reduce administrative burden, though differentiation still relies on the substance of proposals and outcomes.

Consider supporting grantees’ capacity to adopt AI tools and infrastructure. As AI becomes more embedded in workflows, there is an opportunity for funders to think about how grantees can access and use these tools effectively. Supporting this capacity, particularly through flexible, operational funding, can help organizations integrate AI in ways that enhance their work, rather than treating it as a one-off programmatic expense.

Develop and deploy AI systems with responsible AI principles. Specific principles should guide all AI adoption work in grantmaking, including safety and transparency, community-centered design, bias mitigation, human-in-the-loop validation, enterprise-grade security, and sustainability considerations. Start AI adoption through structured experimentation with clear guardrails, and consider empowering early adopters to test tools within defined parameters (e.g., “stoplight” approaches to acceptable use). These frameworks can also support clearer communication and transparency about how AI is being used.

Consider AI disclosure as contextual and relational. Whether and how to disclose AI use in grantmaking processes depends on organizational policies and the level of AI involvement. Practices will likely vary between organizations, especially as the technology matures and experimentation widens, so keep a relational, trust-based mindset.

Maintain human oversight as a core requirement in AI-assisted workflows. AI is never a substitute for human judgment; validation and verification by users must be built into the process. Being explicit about this, both internally and externally, can help reinforce trust, particularly in a field like philanthropy that is deeply relationship-driven and values human expertise.

Design for customization of AI tools to reflect different evaluation contexts. Grantmaking organizations assess financial health and programmatic fit differently, and AI tools can be configured with varying metrics, thresholds, and profiles to match those needs. This flexibility can also support more context-sensitive and equitable evaluation approaches; for example, assessing early-stage organizations differently than more established ones.