In June 2022, Suresh Venkatasubramanian, Assistant Director at the White House Office of Science and Technology Policy, confirmed that the White House is developing an AI bill of rights that aims to establish guardrails, best practices, and expectations for the role of AI and data in a just society. In the context of this major step, it behooves other sectoral leaders – philanthropists, public sector decision-makers, and private sector leaders alike – to consider their role in tackling the challenges and opportunities presented by AI’s prevalence in modern life. 

When ethically designed and deployed, AI makes it possible to address some of our most urgent challenges, from climate change to health inequities. It can strengthen communities, advance equity and justice, and foster unprecedented opportunities. When AI is created and implemented irresponsibly, though, even the most well-intentioned projects can backfire, creating new problems and eroding public trust.

To date, most high-profile artificial intelligence has been driven by major corporations, and businesses have largely built AI products, services, and infrastructure that serve their bottom line and their shareholders. We may be tempted to expect companies to act as a proxy for the public interest, but public sector guidance is needed: the public sector has a significant role to play in providing accessible, relevant guidance on what responsible AI looks like. As we look to the future of this transformative technology, we need a new model of inclusive AI design that prioritizes equity, human potential, and social progress, along with standards to ensure we use it to build a future we can all share in. 

In partnership with the Patrick J. McGovern Foundation, NationSwell convened fellow leaders and decision-makers embedded in AI development and community-centered innovation to explore, challenge, and advance what public trust in AI looks like: identifying challenges and surfacing solutions needed to demystify AI, empowering communities to participate in its development, and enabling its application to solve urgent challenges and uplift communities.

Here are some key topics and insights from those convenings, which culminated in an event in Washington, DC in June:

How do we conceptualize public trust?

Clarity around decision-makers is key. Who controls the building and deployment of AI matters for building trust. Much of the distrust of AI stems from a lack of diversity among the communities creating it. Without representation among technologists, AI cannot be truly inclusive, nor will it represent the best interests of communities at large. Building meaningful, accessible pathways to participation (e.g., through research, feedback, and community-led design) is an important first step.

First impressions matter. The most publicized stories we hear about AI are often negative, including examples of surveillance and harms from facial recognition. These persistent negative reports – while often accurate – shape people’s perception of AI and diminish the technology’s potential to be reconfigured or used for positive social purposes. For more people to understand the relevance and importance of AI in their lives (and therefore be more willing to participate in its interrogation and creation), we must highlight the value AI can bring when used appropriately, as well as pathways to get there.

AI must be relevant. Many of the communities with the most to lose from unethical uses of AI are focused on more pressing human needs: access to food, housing, and education. To increase public engagement, we must build coalitions and create constant touch points with the public to discuss how AI is affecting their communities and daily lives, increasing their personal stake and agency in how these decisions are made. 

How do we engage communities?

Avoid jargon to build inclusion. People don’t want to sound uninformed, and often shy away from taking part in the development of AI that ultimately impacts them. We have to create an environment where people feel welcome, and where attention is focused on the problems AI is seeking to solve. In short, people shouldn’t feel they need a PhD or tech degree to provide meaningful input into how AI affects their communities.

Reframe understanding of AI. If we think and talk about AI as a tool or infrastructure, we can encourage communities to recognize all of the factors (including AI) that shape their lived experience, which can lead to planning and design better fit to community needs.

The people closest to the problem are closest to the solution. Before we decide whether to adopt AI at all, we have to define the problem we are trying to solve. Allowing communities to identify the problem may surface different solutions, which may or may not involve AI, and can shape the development of AI that actually addresses the issue. 

Mind what you’ve built. AI-powered tools are only as strong as their inputs, and once the technology is released, there are no take-backs. It’s critical to ensure the tool’s legitimacy in solving real problems — and not creating new ones.

How do we use AI to develop workforces?

AI education is key. If you aren’t familiar with AI, then it is not going to be a tool in your toolkit. Outside of the tech community, the power of AI is just starting to be understood, especially in nonprofits. We must build capacity and knowledge among people closest to the communities we seek to serve.

A drumbeat of ethics training. In organizations, it’s commonplace to undergo regular cybersecurity training to make sure that every worker is up to speed on threats to digital operations from external bad actors. In the same way, organizations should consider AI ethics training that is both comprehensive and recurring. 

Specificity. For AI regulation to take hold, we must dive into specific industries and applications — that means building AI literacy among leaders in different sectors, grounded in the details of their particular domains.

Explore certification. Certification models help build transparency and reassurance across industries, from apparel certifications to SOC reports in data management, B Corps, and LEED. An ethical AI certification model could similarly help build public trust, particularly from an individual perspective.

What do good policy and regulation look like for AI?

Regulation + augmentation. We need broad regulation to make change happen, much as ADA compliance was crucial for the inclusion of workers with disabilities, but we should also augment and accelerate progress by encouraging standards within industries and businesses.

The power of building a data privacy agency. Beyond the Federal Trade Commission, many members of the public are unsure whom to turn to in government when they have a problem with AI, underscoring the need for an AI bill of rights to help Americans navigate a world increasingly powered by artificial intelligence. 

The Biden Administration has yet to unveil its AI bill of rights, though the White House Office of Science and Technology Policy (OSTP) shared in October 2021 a preview of what one might look like. Until then, we’re not sitting back and waiting: we’re eager to follow and actively engage as this critical document takes form.


For more actionable insights and credible solutions from AI experts on transparency, ethics, implementation, education, and more, read the takeaways from PJMF and NationSwell’s Summit on Building Public Trust in AI.