As artificial intelligence reshapes how institutions operate, many nonprofits and public-sector leaders are grappling with a pressing question: How can AI be deployed responsibly and equitably in service of the public good?
At IBM, that question isn’t theoretical — it’s central to how the company designs, governs, and advances its AI strategy across sectors. In a new resource developed in collaboration with NationSwell, Responsible Use of AI for Social Impact, IBM outlines a practical roadmap for responsible AI adoption that moves beyond high-level principles and into actionable guidance for organizations navigating capacity constraints, ethical considerations, and rapidly evolving technology. The report emphasizes AI literacy; governance as an enabler instead of a blocker; and a clear focus on augmenting, rather than replacing, human capability.
For this installment of Five Minutes with…, NationSwell spoke with Sara Link — IBM’s Global Head of Employee Impact — about what it takes to operationalize trustworthy AI at scale and why government and social sector leaders must be equipped not just with tools, but with the systems and confidence to use them well.
We asked Sara how IBM is reframing responsible AI from a compliance exercise into a performance advantage, what meaningful AI literacy actually looks like inside an organization, and what wild success for ethical AI adoption could look like five years from now.
Here’s what she had to say:
NationSwell: What do you see as most distinctive about IBM’s approach to responsible AI, particularly for nonprofits and social impact organizations that face capacity constraints?
Sara Link, Global Head of Impact at IBM: It’s encouraging to see so many responsible AI principles circulating right now; that level of focus and intentionality is important. At IBM, our approach centers on making AI practical, understandable, and genuinely useful in everyday work. Our belief is that AI should help people do their jobs better — not replace them, overwhelm them, or create confusion.
One of the key insights in the report is that responsible AI has to be realistic for organizations with limited time, staff, and capacity. Nonprofits don’t have extra resources or margin for error, and in many cases they don’t have deep technical expertise in-house. So responsible AI can’t just live in a policy document — it has to be built in a way that reflects those constraints. That means designing tools and governance structures that are usable, accessible, and practical from the start, so organizations can adopt them confidently and integrate them into their daily work.
NationSwell: Augmenting rather than replacing human capability is central to IBM’s view of AI. Can you share an example of what that looks like in practice, either at IBM or with partners?
Link, IBM: In practice, we think about AI as something that helps bring work to life — whether that’s surfacing information, spotting patterns, or saving time on repetitive tasks. But at the end of the day, people still make the final decisions, especially when judgment, fairness, or context matter.
At IBM, for example, internal tools like AskHR or AskCSR help employees find answers more quickly and efficiently. They streamline the process, but they don’t replace accountability. People are still responsible for what happens next. The goal is to enable better, more informed decisions — not to obscure or complicate them.
NationSwell: The report emphasizes foundational AI literacy. What does “good” AI literacy look like inside an organization, and how does that translate into better outcomes?
Link, IBM: Good AI literacy means people aren’t afraid of the tools, but they also don’t blindly trust them. It shows up when leaders and staff understand what AI can support and where human judgment still needs to step in.
You can hear it in the kinds of questions people feel comfortable asking: Does this actually make sense? Should we double-check this before acting on it? For example, in a nonprofit using AI to screen applications or triage services, literacy shows up when staff know how to review AI recommendations, recognize when something doesn’t feel right, and understand that the final decision rests with them.
That kind of literacy leads to better mission outcomes. It reduces errors, helps guard against bias, and builds trust with the communities being served rather than simply automating decisions without oversight.
NationSwell: How does the report reframe responsible AI governance as an enabler rather than a blocker? What is one practical first step an organization can take?
Link, IBM: When you lay out clear rules, it actually becomes easier to move forward. Clarity helps people understand what’s acceptable and what’s not. Without that clarity, uncertainty can cause hesitation or lead organizations to avoid using AI altogether. One of the strongest findings in the report is that governance doesn’t slow adoption; it accelerates it by removing ambiguity.
A practical first step is to build a simple pause point into an existing workflow — a moment where a human reviews and signs off before an AI-driven decision affects someone. It doesn’t have to be complicated. It can be as straightforward as asking: Does this outcome make sense? Would I be comfortable explaining this decision to the person it impacts?
Over time, those small, repeatable checks turn responsible AI from a written policy into a daily habit. And that’s what enables organizations to scale AI safely and confidently.
NationSwell: If you could change one thing about how funders currently approach AI in the social sector, what would it be?
Link, IBM: First, it’s critical for funders to recognize the importance of investing in organizational capacity; that’s the foundation. I would encourage funders to focus not just on funding AI tools, but on supporting people’s ability to use AI well over time.
Investing in technology alone doesn’t create impact if organizations aren’t prepared to work with it. Right now, many nonprofits are expected to figure this out on their own. They may receive funding to pilot AI, but not necessarily the support for training, governance, or long-term learning that makes those tools effective and safe.
Through IBM’s AI for Impact program, which we launched in late 2024, we’ve brought nonprofits together to share how they’re using AI, what questions they have, and where they see opportunity. A recurring theme has been the need for funding that supports both the right tools and the training required to use them responsibly. And research from the IBM Institute for Business Value shows that skills are evolving rapidly — 57% of executives surveyed expect today’s skills to become outdated by 2030. That pressure is even more acute in the social sector, where resources are already stretched.
The funders making the biggest difference are supporting AI readiness, not just adoption — investing in training, shared standards, and giving teams time to learn and adapt, not just deliver. I’d also encourage funders to make their grantees aware of programs like AI for Impact. Many of these resources are free and can help organizations and their leaders build the knowledge and confidence they need to prepare for what’s ahead.
NationSwell: If responsible AI adoption truly takes root, what might wild success look like for the sector five years from now?
Link, IBM: The vision of success, to me, is that AI makes work easier and fairer — not more stressful or confusing. If we can eliminate that sense of overwhelm and instead empower people to use their skills more fully, that would be a meaningful outcome.
In that future, people would understand the tools they’re using and feel confident explaining the decisions those tools inform. AI would help nonprofits do more good without eroding trust or weakening human connection. Most importantly, technology would support organizations in serving communities better — not get in the way.
That’s what wild success looks like: better outcomes for communities, more efficient pathways to get there, and trust and connection preserved throughout the process.
NationSwell: What have you personally learned or found inspiring as you’ve helped lead this work around AI? How has this journey informed your broader leadership in the corporate impact space?
Link, IBM: For a long time, I’ve focused on capacity building for nonprofits and on how the corporate sector and funders can partner more closely with them, providing the right level of support so they can better serve their communities.
What’s been most inspiring lately is the openness I’ve seen when nonprofits come together — the willingness to share ideas, build relationships, and solve challenges collaboratively. There’s a real energy in the room when leaders from across sectors are learning from one another and exploring what’s possible.
I saw that firsthand at a recent conference after speaking on this topic: A healthcare employee approached me and shared that she and her colleagues had been experimenting with AI tools to solve internal challenges, and they were eager to bring leadership into the conversation to explore the potential more formally. She ended up connecting with another healthcare system that was further along, helping to broker a conversation between them.
That kind of openness — being curious about what’s out there and willing to imagine what could be possible — is what excites me most. It’s that spirit of shared learning and forward momentum that will ultimately drive meaningful change.
NationSwell: Is there anything else from the report — or from your leadership perspective — that you’d like to share?
Link, IBM: As someone who doesn’t necessarily have an engineering or a technical background, what’s been especially inspiring to me is realizing that you don’t need deep technical expertise to ask the right questions or to begin this journey of continuous learning. You don’t have to be an engineer to engage meaningfully with AI.
Personally, this experience has shown me how much further we can take our work by building our skills, staying curious, and asking thoughtful questions. When we approach AI as a tool for strengthening connections and building stronger partnerships — rather than something intimidating or purely technical — it becomes incredibly energizing. That mindset has been one of the most exciting parts of this journey for me.