In the fall of 2021, NationSwell partnered with the Patrick J. McGovern Foundation (PJMF) to assemble a group of cross-disciplinary thinkers and leaders to kick off a conversation about building public trust in artificial intelligence (AI). The timing of that launch was no coincidence. Despite AI's potential to transform the human experience, touching the modern workforce, connection within communities, and civic and political participation, to name a few, public opinion on the technology remains mixed. It ranges from boundless optimism about the possibilities AI could unlock to deep distrust, rooted in evidence and a slew of headlines about instances where AI has been used, both maliciously and inadvertently, to target and harm historically marginalized communities.

“We must remember that in order to show the full promise of AI and AI as a positive tool, we must unite together to make important shared decisions about how this technology will be created, used, and regulated,” says Patrick McGovern, current Chair of PJMF’s Board of Trustees and a longtime technology executive. “At the core of our optimism is a belief that trust in AI and in each other must be built and earned.”

At a summit convened in Washington, D.C. in June 2022, NationSwell and PJMF once again teamed up with an influential group of cross-sector leaders to discuss what it will take to earn the public’s trust in AI. During the event, panelists dove deep into what accountability, transparency, nondiscrimination, data protection, and justice will look like for the future of AI, and also discussed what it means to build credibility during a time when it is notoriously difficult to cultivate public trust.

A formula for trust

Clarence Wardell, keynote speaker and Chief Data and Equity Officer for the American Rescue Plan (ARP) implementation team at the White House, works every day to bridge the gap between the Biden administration’s policy development and the technical data and human-centered design tools that allow the government to deliver policy outcomes more effectively for the American people.

Wardell compared the challenge of building public trust in AI to the breakdown of trust caused by the critical failures of American policing in the high-profile deaths of several Black men in recent years, including Michael Brown in 2014 and George Floyd in 2020.

“A system, an institution that is designed to protect and serve, to keep you and I safe, had failed to deliver that safety for a certain segment of this population,” Wardell said. “I don’t think it’s any different than building trust or public trust in AI. Core to building public trust is delivering outcomes in line with the public’s expectations of what these tools can do, but at the very least doing no harm.”

Part of the work, then, is using data reliably and consistently to prove over time that the technology systems we’re building not only pose no active threat to marginalized communities, but actively stand to make community members’ lives better.

Wardell said that by using data to qualitatively evaluate the performance of AI systems, the Biden administration has already taken steps to address and prevent racial and ethnic algorithmic bias in home valuations. It has also implemented guardrails and called for further study of facial recognition technology, other biometric tools, and predictive algorithms in policing and other areas of criminal justice.

“The key here is to be able to show these things hold over time, not just at an aggregate level, but at a level that’s specific and personal to individuals and communities,” Wardell said, “particularly those that have been harmed by other tools, technologies, institutions, and systems in the past.”

Confronting bias and trauma

Dr. Kirk Borne, Chief Science Officer at DataPrime, highlighted the importance of acknowledging how our limited perspectives give rise to bias, and how we can push past those limits with data.

“The truth lives in a higher dimensional space than our limited perspective,” Borne shared. “And in statistics, that’s called bias. That’s a mathematical statement in statistics, which is: there’s more structure there than your data is allowing for. There’s more information than what you’re taking into account when you’re making your decision. Diversity of perspective and diversity in statistics, they both mean the same thing: breaking the bias wherein we’re limited by our own perspective. Start collecting more data, and we can start resolving those errors… because we’re adding the different perspectives that all those additional data sources give us.”

Nicol Turner Lee of the Brookings Institution elaborated that while tackling inherent biases will be critical, determining how those biases show up in systems will still be difficult, particularly when working with “traumatic data” on wealth, systemic inequalities, criminal justice, and policing, which historically carries greater implications for certain groups.

“There’s always going to be differential treatment: ‘I like tan jackets, treat me differently, show me every tan jacket that you sell,’” Lee said. “But when we start talking about the coordination of how you figure out through the inferential economy that I not only like tan jackets, but I also spend a lot of time buying stuff for my Black daughter or I spend time looking at my bank account… the inferences that come with AI, that data availability lends itself to traumatic circumstances for certain populations.”

Data and AI ethicist Renée Cummings underscored Lee’s point about the importance of understanding the historical underpinnings of algorithmic bias.

“One of the things that I always think about is, ‘how do we code what it means to be Black and Brown?’” Cummings said. “Those data points have always been data points related to risk, data points related to danger, data points related to liability, data points related to what is untrustworthy or what is unworthy. So when we think about those credit scores, and when we think about those unfair sentences, and when we think about an algorithm being able to deny you parole or an opportunity to get a home or loan, then we’ve got to think about why people are not trusting of this technology.”

Bridging the gap between makers and users

Chris Kuang, co-founder of the US Digital Corps, expanded on the transparency issues in AI and neural networks, specifically those posed by “black box algorithms,” which are designed by people who are “…so far removed from the end impact of who that system is going to impact.”

According to Kuang, building public trust will be contingent upon bridging those gaps to ensure that the people who will be most affected by these systems are also involved in building them: “…[It’s] not just your AI subject matter experts, but your program area folks, if it is an economic system that we’re determining credit scores or whatever it might be,” Kuang said. “Bringing all those people together, but fundamentally with the people at the end of the day, who are being touched by those systems.”

To build such feedback pipelines and ensure that users’ voices are heard, Lee pointed to her work with the Energy Star rating model, a “way to integrate consumer feedback into the design and models,” so that people have an avenue to tell creators what they’re getting wrong.

Being transparent from start to finish

Just as there should be more horizontal communication between AI creators and the communities they affect, Kuang said there should also be more public transparency throughout the creation process — including when and where decisions are being made by algorithms.

“It’s not at the end when all of a sudden you decide to be transparent, it’s transparent in your aims, transparent in the trade offs,” Kuang said. “I think anyone who is building these models has that responsibility, whether they’re here in the public sector, they’re in the private sector or somewhere in between. I think there is transparency that should come when it comes to the people that are consulted. It’s not just the data scientists and the AI experts, it’s people in the community.”

Putting protections in place

Lee also spoke about the need for increased governmental oversight and regulation in order to bring AI to heel in the absence of strict enforcement from the private sector. Such legislation — like the Algorithmic Accountability Act, which would essentially require companies to issue risk assessments for their algorithms — will be instrumental in safeguarding against widespread discrimination. 

“I want models to not amplify the same type of racism and discrimination and gender bias that we’re seeing show up on these platforms,” Lee said. She emphasized that lawmakers will need to be more nimble in order to remain relevant in this space.

In a conversation with Hilke Schellmann, Assistant Professor of Journalism at NYU, Suresh Venkatasubramanian, currently the Assistant Director for Science and Justice in the Science and Society Division of the White House’s Office of Science and Technology Policy, affirmed the need for more guardrails for automated technologies. He also went a step further, calling for the public to be involved “…at every stage in the development and the assessment in the evaluation of technologies, [in order to] make sure all stakeholders are involved.” According to Venkatasubramanian, this could look like a technology “bill of rights”: a set of principles that govern the way AI technology is created and implemented.

“This is not a legislative proposal or a legal proposal, it’s a set of aspirational principles along with very detailed guidelines that would help developers, that would help people who want to build these systems with these protections in place to do what needs to be done,” he said. Venkatasubramanian confirmed that a first version of such a “bill of rights” will be issued by his team in the near future.

Opening up creates opportunity 

Finally, Vilas Dhar, president of the Patrick J. McGovern Foundation, spoke about fairness and representation as the cornerstones of any conversation about building public trust in AI. Dhar suggested that, while the common wisdom generally holds that the wrong people are in the seats of power making decisions that govern how technology gets made, maybe the truth is that there simply aren’t enough people in the rooms where those decisions get made.

“Where are the representatives that speak to social and civic organizations that represent labor, that represent employers, that represent institutions that have been far away from a technological revolution and yet are being transformed by it?” Dhar asked. 

“I’ll suggest that there’s an opportunity that then comes out of this,” he said. “That even as we’ve heard the incredible risks vulnerable populations face when they aren’t a part of making decisions about the creation of new technologies, about their implementation, and maybe most important about their ongoing review, what happens when we’re able to bring them into the conversation?”


Learn more about the Patrick J. McGovern Foundation, a NationSwell Institutional Member, here.