As we step into 2025, AI technologies are primed to drive even greater innovation around societal challenges, from fostering inclusive growth to expanding educational pathways and beyond. But AI will also continue to raise important ethical questions, and it carries the potential to drive new inequities.
At NationSwell’s recent roundtable discussion, What’s Ahead in Social Impact and AI, leaders and innovators from across sectors joined featured panelists Vilas Dhar of The Patrick J. McGovern Foundation, Nathan Froelich of Blackbaud, and Stephen Plank of The Annie E. Casey Foundation. Together, they shared strategies for how AI is currently being leveraged to meet societal challenges, and surfaced ethical considerations and best practices for responsible AI implementation moving forward.
Here are some of the key takeaways from the event:
Insights:
Philanthropic funders have a key role to play in ensuring nonprofit partners get the AI tools they need at scale. New technologies have the potential to serve vulnerable communities, including by organizing decades of longitudinal research and creating predictive engines that can improve community wellbeing. But given the corporate power dynamics surrounding how tech is built and deployed, philanthropies and companies must step forward and advocate for the technology solutions their partners need on the ground so those solutions get built at scale. Funders have a unique opportunity to come together to build shared capacity, new institutions, and resources, ensuring that future investments in AI go toward honing its potential to create new pathways to dignity and justice in the world.
A good intelligence strategy will require us to be extremely intentional about governance. One of the most pressing challenges posed by AI will be how we can leverage and deploy it in a way that doesn’t harm people and the planet. We need to set up effective systems of governance, paying attention to how we’re deploying generative AI both within our own organizations and in the marketplace. The development of a set of guiding principles will be instrumental in determining which technologies your organization ultimately adopts, ensuring that the tools you’re using meet your ethical standards.
The creation of empowerment councils can help you tap into the most salient use cases for AI. Convening grantees and employees and giving them the access and latitude to experiment with AI can fuel unfettered iteration and innovation. Providing the tools and encouraging exploration helps surface the most compelling examples of how people are using new technologies to be more productive and advance organizational goals, which can ultimately inform decisions about when and how to scale solutions appropriately.
Public-private partnerships hold great potential in shaping AI decisions and adoption. Engaging directly with tech funders through roundtable discussions can surface innovative ways to leverage private sector partnerships for tool licensing and technical assistance. Similarly, building peer learning communities where government leaders can access AI expertise and collectively develop approaches to service delivery and technology procurement can be a powerful way to shape policy decisions.
AI’s potential to displace or disrupt jobs depends on which workforce you’re talking about. While there is good research to suggest that corporate leaders do not expect AI to contribute to significant disruption in white-collar jobs, those outside of traditional 9-to-5 roles still face challenges to upskilling, and in many cases AI is being developed with goals that run counter to the interests and livelihoods of low-income and nontraditional workers. At the same time, new technologies also hold the potential to help workers maintain and build power: by facilitating organizing among union members, helping workers file wage theft claims and visualize data, and influencing state policy decisions.