In the fall of 2021, the Patrick J. McGovern Foundation and NationSwell convened a collaborative of cross-sector thinkers and leaders for critical conversations on building public trust in Artificial Intelligence (AI) and encouraging public participation in AI design, development, and deployment. The conversations centered on how public trust in AI intersects with Community Engagement, Workforce Development, Policy & Regulation, and Ethics & Rights, and led us to convene public policymakers and leading AI experts to discuss one of society’s most urgent questions:

How can we build public trust in AI?

On June 9, 2022, the collaborative will culminate in a unique event where audience members will hear from Vilas Dhar, Clarence Wardell, Cristiano Lima, Nicol Turner Lee, Renée Cummings, Kirk Borne, Chris Kuang, Hilke Schellmann, Suresh Venkatasubramanian, and more in a live conversation going deeper on what it will take to create public trust in AI, covering the topics outlined below and more.

Demystifying AI was posited by the collaborative as a prerequisite for engendering a stronger and more comprehensive understanding of AI, and for building a foundation of public confidence in the legitimacy of AI to deliver sustainable, future-ready solutions to some of society’s greatest challenges. Putting AI in context and empowering people to understand AI and its impact on their lives is critical to embedding equity into how AI works, and to making public engagement possible in key areas of influence, including workforce development, service delivery, and community resilience.

The promise and peril of AI, and how to deploy it to better serve the public interest, were explored against the backdrop of a deeply imbalanced AI ecosystem that often reinforces preexisting disparities and persistent racial injustices. Prioritizing inclusivity to secure the diverse talent pool required to drive community-driven AI projects, community-inspired AI research, and equitable algorithmic solutions was offered as an effective and essential response to some of the many challenges that continue to slow the maturity of the technology. Diversity in AI was identified as necessary not just to deliver AI equity to high-needs and underserved communities, but also to strengthen US innovation outcomes.

Participants explored the disproportionate power currently held in this space by the corporate sector and its leaders, and the need for solutions that work on businesses from the inside (such as ongoing ethics training and education for technologists and consumers) and from the outside (such as governance systems, industry standards, and other protection mechanisms installed to mitigate harm). Conversations also covered how the social, public, and private sectors need to collaborate as custodians of the common good to design AI policy solutions and products that empower and uplift all, reimagining public policy and governance and offering a critical rethinking of political participation, social change, and civic engagement in the age of AI.

The collaborative posited that more rigor in the launch of AI solutions is an essential building block of trust, that the urge for speed must be balanced against the imperative to do no harm, and that the field needs a system that demands accountability for harm inflicted and evidence of causality for problems solved.

Participants also discussed how to ensure AI is culturally responsive and relevant, and how to use advocacy to include those who have been historically excluded, denied access to the design and development of AI-driven decision-making systems, and left out of the future of AI. They stressed the importance of inviting communities to define the problems around which AI solutions are built, because a tool is only as strong as its ability to legitimately solve those problems, and there should be “no application without representation.”

Not preoccupied solely with preventing harm, the sessions also focused on the need to counter knee-jerk negativity and shed light on a positive narrative for AI, exploring how AI could be used to correct inequities and injustices, call AI powerbrokers to account, demand transparency, and ensure the future of AI is justice-oriented and trauma-informed. Participants even hypothesized that it may be time to use language that is less lofty and more rooted in what we can achieve — shifting from ‘AI ethics’ toward ‘responsible and accountable AI.’

These themes will form the backbone of the unique event on June 9, 2022, at Wild Days at the Eaton Hotel in Washington, D.C., bringing public policymakers and leading AI experts together to discuss one of society’s most urgent questions: What will it take to build public trust in AI?