Creating guardrails and confronting trauma: What it will take to build public trust in AI

In the fall of 2021, NationSwell partnered with the Patrick J. McGovern Foundation (PJMF) to assemble a group of cross-disciplinary thinkers and leaders to kick off a conversation about building public trust in artificial intelligence (AI). The timing of that launch was no coincidence: despite AI's potential to transform the human experience — reshaping the modern workforce, connection within communities, and civic and political participation, to name a few — public opinion on the technology remains mixed. It ranges from boundless optimism about the possibilities AI unlocks to deep distrust, grounded in evidence and a slew of headlines about instances where AI has been used, both maliciously and inadvertently, to target and harm historically marginalized communities.

“We must remember that in order to show the full promise of AI and AI as a positive tool, we must unite together to make important shared decisions about how this technology will be created, used, and regulated,” says Patrick McGovern, current Chair of PJMF’s Board of Trustees and a longtime technology executive. “At the core of our optimism is a belief that trust in AI and in each other must be built and earned.”

At a summit convened in Washington, D.C. in June 2022, NationSwell and PJMF once again teamed up with an influential group of cross-sector leaders to discuss what it will take to earn the public’s trust in AI. During the event, panelists dove deep into what accountability, transparency, nondiscrimination, data protection, and justice will look like for the future of AI, and also discussed what it means to build credibility during a time when it is notoriously difficult to cultivate public trust.

A formula for trust

Clarence Wardell, keynote speaker and Chief Data and Equity Officer for the American Rescue Plan (ARP) implementation team at the White House, works every day to bridge the gap between the Biden administration's policy development and the data and human-centered design tools that allow the government to more effectively deliver policy outcomes for the American people.

Wardell compared the challenge of building public trust in AI to the challenge of rebuilding trust in American policing after its critical failures in the high-profile deaths of several Black men in recent years, including Michael Brown in 2014 and George Floyd in 2020.

“A system, an institution that is designed to protect and serve, to keep you and I safe, had failed to deliver that safety for a certain segment of this population,” Wardell said. “I don’t think it’s any different than building trust or public trust in AI. Core to building public trust is delivering outcomes in line with the public’s expectations of what these tools can do, but at the very least doing no harm.”

Part of the work, then, becomes using data reliably and consistently to prove over time that the technology systems we're building not only pose no active threat to marginalized communities, but actively stand to make community members' lives better.

Wardell said that by using data to evaluate the performance of AI systems, the Biden administration has already taken steps to address and prevent racial and ethnic algorithmic bias in home valuations. It has also implemented guardrails and called for further study of facial recognition, other biometric tools, and predictive algorithms in policing and other criminal justice fields.

“The key here is to be able to show these things hold over time, not just at an aggregate level, but at a level that’s specific and personal to individuals and communities,” Wardell said, “particularly those that have been harmed by other tools, technologies, institutions, and systems in the past.”

Confronting bias and trauma

Dr. Kirk Borne, Chief Science Officer at DataPrime, highlighted the importance of acknowledging the limits of our own bias, and how we can push past that with data.

“The truth lives in a higher dimensional space than our limited perspective,” Borne shared. “And in statistics, that’s called bias. That’s a mathematical statement in statistics, which is: there’s more structure there than your data is allowing for. There’s more information than what you’re taking into account when you’re making your decision. Diversity of perspective and diversity in statistics, they both mean the same thing: breaking the bias wherein we’re limited by our own perspective. Start collecting more data, and we can start resolving those errors… because we’re adding the different perspectives that all those additional data sources give us.”
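
To make Borne's statistical point concrete, here is a minimal illustrative sketch (not from the event, with entirely hypothetical numbers): a model fit on a single feature absorbs the effect of an omitted, correlated factor and systematically misestimates it, while adding the second data source recovers the true relationship.

```python
# Illustrative sketch only: how omitting a relevant variable introduces
# statistical bias, in the sense Borne describes. All values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True world: the outcome depends on two correlated factors.
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(scale=0.5, size=n)   # a second "perspective"
y = 2.0 * x1 + 3.0 * x2 + rng.normal(scale=1.0, size=n)

# Limited perspective: model y using x1 only.
X_limited = np.column_stack([np.ones(n), x1])
coef_limited, *_ = np.linalg.lstsq(X_limited, y, rcond=None)

# Broader perspective: add the second data source.
X_full = np.column_stack([np.ones(n), x1, x2])
coef_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)

print("x1 effect, x1-only model:", round(coef_limited[1], 2))  # ~4.1 (biased upward)
print("x1 effect, full model:   ", round(coef_full[1], 2))     # ~2.0 (the true value)
```

With only x1 in the model, the estimate soaks up the influence of the omitted factor; adding the second data source resolves the error, which is the statistical analogue of adding perspectives.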

Nicol Turner Lee of the Brookings Institution elaborated that while tackling inherent biases will be critical, determining how those biases show up in systems will still be difficult — particularly when working with “traumatic data” on wealth, systemic inequalities, criminal justice, and policing, which historically has carried greater consequences for certain groups.

“There’s always going to be differential treatment: ‘I like tan jackets, treat me differently, show me every tan jacket that you sell,’” Lee said. “But when we start talking about the coordination of how you figure out through the inferential economy that I not only like tan jackets, but I also spend a lot of time buying stuff for my Black daughter or I spend time looking at my bank account… the inferences that come with AI, that data availability lends itself to traumatic circumstances for certain populations.”

Data and AI ethicist Renée Cummings underscored Lee’s point about the importance of understanding the historical underpinnings of algorithmic bias.

“One of the things that I always think about is, ‘how do we code what it means to be Black and Brown?’” Cummings said. “Those data points have always been data points related to risk, data points related to danger, data points related to liability, data points related to what is untrustworthy or what is unworthy. So when we think about those credit scores, and when we think about those unfair sentences, and when we think about an algorithm being able to deny you parole or an opportunity to get a home or loan, then we’ve got to think about why people are not trusting of this technology.”

Bridging the gap between makers and users

Chris Kuang, co-founder of the US Digital Corps, expanded upon the transparency problems in AI and neural networks, specifically those posed by “black box algorithms,” which are designed by people who are “…so far removed from the end impact of who that system is going to impact.”

According to Kuang, building public trust will be contingent upon bridging those gaps in order to ensure that the people who will be most affected by systems are the same people who are building the systems: “…[It’s] not just your AI subject matter experts, but your program area folks, if it is an economic system that we’re determining credit scores or whatever it might be,” Kuang said. “Bringing all those people together, but fundamentally with the people at the end of the day, who are being touched by those systems.”

As an example of building such feedback pipelines to ensure that users' voices are heard, Lee pointed to her work with the Energy Star rating model, a “way to integrate consumer feedback into the design and models,” so that people have an avenue to tell creators what they're getting wrong.

Being transparent from start to finish

Just as there should be more horizontal communication between AI creators and the communities they affect, Kuang said there should also be more public transparency throughout the creation process — including when and where decisions are being made by algorithms.

“It’s not at the end when all of a sudden you decide to be transparent, it’s transparent in your aims, transparent in the trade offs,” Kuang said. “I think anyone who is building these models has that responsibility, whether they’re here in the public sector, they’re in the private sector or somewhere in between. I think there is transparency that should come when it comes to the people that are consulted. It’s not just the data scientists and the AI experts, it’s people in the community.”

Putting protections in place

Lee also spoke about the need for increased governmental oversight and regulation to bring AI to heel in the absence of strict self-policing by the private sector. Such legislation — like the Algorithmic Accountability Act, which would essentially require companies to issue risk assessments for their algorithms — will be instrumental in safeguarding against widespread discrimination.

“I want models to not amplify the same type of racism and discrimination and gender bias that we’re seeing show up on these platforms,” Lee said. She emphasized that lawmakers will need to be more nimble in order to be relevant in this space.

In a conversation with Hilke Schellmann, an assistant professor of journalism at NYU, Suresh Venkatasubramanian, currently the Assistant Director for Science and Justice in the Science and Society Division of the White House's Office of Science and Technology Policy, affirmed the need for more guardrails for automated technologies. But he also went a step further, calling for the public to be involved “…at every stage in the development and the assessment in the evaluation of technologies, [in order to] make sure all stakeholders are involved.” According to Venkatasubramanian, this could look like a technology “bill of rights” — a set of principles that govern the way AI technology is created and implemented.

“This is not a legislative proposal or a legal proposal, it’s a set of aspirational principles along with very detailed guidelines that would help developers, that would help people who want to build these systems with these protections in place to do what needs to be done,” he said. Venkatasubramanian confirmed that a first version of such a “bill of rights” will be issued by his team in the near future.

Opening up creates opportunity 

Finally, Vilas Dhar, president of the Patrick J. McGovern Foundation, spoke about fairness and representation as the cornerstones of any conversation about building public trust in AI. Dhar suggested that, while the common wisdom generally holds that the wrong people are in the seats of power making decisions that govern how technology gets made, maybe the truth is that there simply aren’t enough people in the rooms where those decisions get made.

“Where are the representatives that speak to social and civic organizations that represent labor, that represent employers, that represent institutions that have been far away from a technological revolution and yet are being transformed by it?” Dhar asked. 

“I’ll suggest that there’s an opportunity that then comes out of this,” he said. “That even as we’ve heard the incredible risks vulnerable populations face when they aren’t a part of making decisions about the creation of new technologies, about their implementation, and maybe most important about their ongoing review, what happens when we’re able to bring them into the conversation?”


Learn more about the Patrick J. McGovern Foundation, a NationSwell Institutional Member, here.

Electronic Press Kit: Building Public Trust in AI Summit

In the fall of 2021, the Patrick J. McGovern Foundation and NationSwell convened a collaborative of cross-sector thinkers and leaders for critical conversations on building public trust in Artificial Intelligence (AI) and encouraging public participation in AI design, development, and deployment. The conversations centered on how public trust in AI intersects with Community Engagement, Workforce Development, Policy & Regulation, and Ethics & Rights, which led us to convene public policymakers and leading AI experts to discuss one of society’s most urgent questions:

How can we build public trust in AI?

On June 9, 2022, the collaborative will culminate in a unique event where audience members will hear from Vilas Dhar, Clarence Wardell, Cristiano Lima, Nicol Turner Lee, Renée Cummings, Kirk Borne, Chris Kuang, Hilke Schellmann, Suresh Venkatasubramanian, and more in a live conversation that goes deeper on what it will take to create public trust in AI, covering the topics outlined below.

The collaborative posited demystifying AI as a prerequisite for a stronger and more comprehensive public understanding of AI, and for building a foundation of public confidence in the legitimacy of AI to deliver sustainable, future-ready solutions to some of society’s greatest challenges. Putting AI in context and empowering people to understand AI and its impact on their lives is critical to embedding equity into how AI works, and to making public engagement possible in key areas of influence, including workforce development, service delivery, and community resilience.

The promise and peril of AI and how to deploy AI to better serve public interest was explored against the backdrop of a deeply imbalanced AI ecosystem that often reinforces preexisting disparities and persistent racial injustices. Prioritizing inclusivity to secure the diverse talent pool required to drive community-driven AI projects, community-inspired AI research, and equitable algorithmic solutions was offered as an effective, and essential, solution to some of AI’s many challenges that continue to slow the maturity of the technology. Diversity in AI was identified as necessary not just to deliver AI equity to high-needs and underserved communities, but also to strengthen US innovation outcomes.

Participants explored the disproportionate power currently held in this space by the corporate sector and its leaders, and the need for solutions that infiltrate businesses from the inside (such as ongoing ethics training and education for technologists and consumers) and from the outside (like governance systems, industry standards, and other protection mechanisms installed to mitigate harms). Conversations also covered how the social, public, and private sectors need to collaborate as custodians of the common good, designing AI policy solutions and products that empower and uplift all by reimagining public policy and governance and offering a critical rethinking of political participation, social change, and civic engagement in the age of AI.

The collaborative posited that building trust requires more rigor in the launch of AI solutions, a balance between the urge for speed and the commitment to do no harm, and a system that demands accountability for harm inflicted and establishes causality for problems solved.

Participants also discussed how to ensure AI is culturally responsive and relevant, and how to use advocacy to include those who have been historically excluded and denied access to the design and development of AI-driven decision-making systems and left out of the future of AI — stressing the importance of inviting communities to define the problem around which AI solutions are built, because the tool is only as strong as its ability to legitimately solve those problems and there should be “no application without representation.”

Not just preoccupied with preventing harm, the sessions also focused on the need to counter knee-jerk negativity and shed light on a positive narrative for AI: exploring how AI could be used to correct inequities and injustices, call AI power brokers to account, demand transparency, and ensure the future of AI is justice-oriented and trauma-informed. Participants even hypothesized that it may be time to use language that is less lofty and more rooted in what we can achieve — shifting from ‘AI ethics’ to aiming for ‘responsible and accountable AI.’

These themes will form the backbone of the unique event on June 9, 2022, at Wild Days at the Eaton Hotel in Washington, D.C., bringing public policymakers and leading AI experts together to discuss one of society’s most urgent questions: What will it take to build public trust in AI?

From “data for good” to “data for impact”

In order to actually deliver impact through data science at scale, what needs to change across our sector?

At a recent data.org event, we convened social impact organizations, funders, and data science leaders to explore ways to address this challenge. We sought participants’ insights and gained a clearer sense of what it will take for data to be accessed and applied for good.

What follows are three calls to action that emerged from our conversation. We believe that realizing these calls would catalyze a shift toward scalable, sustainable, and genuinely community-driven projects that help the social good sector use data science to realize impact.


Deepen our commitment to understanding the problem

It’s easy to fall for the flash and glimmer of a new AI solution — but we can’t stop there. We have to deepen our understanding of the problems that we are trying to solve, and our commitment to working with the people and communities that experience real challenges every day.

This might seem like a small shift, but it’s seismic. It pushes us beyond thinking only about the mechanics of a technical solution and instead challenges us to ask how new technology can change the balance of power in favor of people and communities that have been systematically excluded or harmed.

To be clear, passion for new technical solutions isn’t bad. Many problems we face in the social impact sector do require innovation and creativity. But simply having a new approach doesn’t guarantee actual impact. Our metric for success cannot simply be that we delivered a solution. That solution must meaningfully contribute to reducing suffering or improving equity.

Doing this isn’t easy. It requires technical experts to diversify their networks and engage with humility. True understanding of social issues cannot be achieved without community experience and partnership. Creating technology far from the community it purports to benefit rarely works. Instead, we must partner with communities to develop solutions that are responsive and designed to scale in the real world.

Funders play a critical role in shifting the focus from novel solutions to actual impact. Much of the innovation funding ecosystem currently rewards building new things instead of investing in long-term capacity building and problem solving. As a solution builder, it can be easy to lose focus on the impact you seek in favor of amplifying whatever will be most attractive to funders. Change makers and funders bear a joint responsibility to honor the complexity and context of the problem at hand and to continually deliver impact, rather than over-indexing on whatever counts as the shiny, data-driven technology of the moment. A disciplined focus on the specific problem data science is helping you understand or address at any given moment is essential to unlocking the power of this technology. Without that discipline, data science can become a distraction that dilutes or derails your impact.

So, we must follow the problem. And one of the things we might learn as we follow it is that the problem cannot be solved by a single data science method. For people coming from data science or engineering backgrounds, that means you might have to admit that you aren't the biggest part of the solution. That reflection, and the maturity it requires, is critical for figuring out what you can do: for finding an angle in, an approach, or an impact model that actually speaks to the real problem. You have to identify the problem you are capable of solving and find true product-impact fit.

While following the problem seems intuitive, it is inherently very difficult. But it’s urgently necessary if we want to advance and truly use data to drive impact — rather than just giving rise to pilots that explore emerging technologies. As social impact officers, implementers, and funders, we must honor the complexity of the problems that we seek to solve, and be committed enough to fall in love with the actual problems themselves.


Build the muscle for iteration

Advancing our sector also means seeing and supporting projects through to the very end, to the point where people are applying them in their everyday lives or organizations. It is much easier to build a new product and get it to the Minimum Viable Product stage. But to deliver on the impact, you have to actually use the product over time. You have to build the muscle for iteration.

Embracing iteration helps to solve one key challenge social impact organizations face: a lack of clarity around the metric for which they are optimizing. In profit-driven business, it’s much more straightforward: Does a new recommender algorithm, for example, increase engagement, conversions, and then revenue?

But for social impact organizations, measuring and agreeing on what the key metrics actually are can be messier. Building a muscle for iteration means committing to actually look at the outcomes of deploying a new method, and being able to regularly and reliably measure those outcomes at a reasonable cost. And like building muscle in the gym, this process requires trial and error — and an ongoing commitment.

Funders have traditionally taken a very linear, short-term approach to supporting solutions — providing resources to get to the end of an initial pilot, for instance — but the messy nature of achieving impact goals demands that we embrace a more iterative mindset and approach. Common case studies for success — like BlueConduit’s data-driven approach to helping Flint with its water crisis or GiveDirectly’s efforts to use data science to target cash transfers for COVID-19 relief — all reflect an iterative narrative, reinforcing the ideal process of idea, implementation, and success, with funding and governmental support at every step of the journey. However, those seamless journeys are the exception, not the rule.

The reality of driving impact outcomes is more like life: unpredictable and requiring constant course correction. Imagine an exciting new algorithm that promises to solve hunger in a community. We might expect there to be funding to build the algorithm, write the paper about it, and generate press; but when it comes to working through its application with 20 nonprofits with different use cases, we may realize that the algorithm needs continuous refining, and that the exercise of testing and refining will take us in new and unexpected directions around how to effectively serve diverse neighborhoods — or, at worst, that no one needs the technology in its initial form, and we’ll have to go back to the drawing board and build something fundamentally different from the initial solution.

That’s where our current systems for funding and support can fall apart. So we need solution builders and funders to anticipate and embrace the 2.0s of a project, the 3.0s, and beyond. Only through the creation of Minimum Viable Products and their testing phases can we understand which component of the problem statement we can effectively influence, improve, predict, or make more efficient.


Build capacity and human systems — not just new tech

Sustaining and scaling data science for impact requires a deep commitment to capacity building and technical education. This capacity building must happen across the ecosystem, from implementing organizations, through to funders.

At this stage, investing in the capacity of humans is probably the most powerful thing we can do to move along the transformation curve. Because humans and systems are what actually move the needle on solving problems, investments in human systems ensure that innovation happens at scale, rather than just one thing at a time.

Katherine Lucey, who leads Solar Sister, is a perfect example of what you unlock when you invest in the humans and internal capacity behind a solution. With data.org’s support through the Inclusive Growth and Recovery Challenge, she invested in making sure she had data experts on her team and the budget to support them in the long term. As a result, her work supporting local women entrepreneurs in Africa who work with clean energy has become a model for how data science can help steer social impact. That evolution is the direct result of investments in capacity.

As another example of building the capacity of partners: the Center for Effective Global Action (CEGA) devised a system for locating and measuring poverty. But the step that actually helped people in poverty was getting money to them, and having policymakers who understood the system, could adapt it, and could move it through. So the CEGA system of poverty measurement was important, but only insofar as it enabled a sophisticated, human-driven administrative process that actually distributed money.

At the end of the day, it will be our subject matter experts who understand the complexity and the context of the challenges faced by the communities seeking to solve problems in their neighborhoods. We have a responsibility to make sure that this type of thinking, learning, and tooling is available.

How do we train more? How do we implement more for more people?

As problem solvers, and as funders of problem solvers, we need to give more consideration to patient capital — especially when we’re talking about product-impact fit — and to learning how to fund product roadmaps. We need to be asking not just, “What can the technology do?” but, “How do we train more people? How long can they sustain this work? What else do the people doing this work need? How do we build interdisciplinary teams that have the data skills, technical skills, community insight, and subject matter expertise of the problem?”


Funders or impact partners shouldn’t be afraid if any of this sounds overly ambitious or daunting: it’s just a different mindset, and a different set of knowledge to acquire. We can all do this together — but to do it, we must change how we build, fund, train, support, and lead the sector moving forward. We must move from being solutions-focused to being problem-focused, from launch-focused to iteration-focused, and from tech-focused to capacity-focused.

These challenges require all of us — innovator, funder, and implementer alike — to contribute. They’re complex challenges, but tackling them is exactly what data.org was set up to do. For practical information and inspirational ideas to help social impact organizations use data science to solve the world’s biggest problems, check out data.org’s public resource library.

Move slow and fix things: Re-imagining the data for good sector

“How do we [as a sector] scale things?” Danil Mikhailov, Executive Director of data.org, asked attendees at the beginning of the event. “How do we determine whether the impact of funders such as data.org and others on this panel is inadvertently causing issues, and how we might behave or fund differently to support the sector for the long term?”

In early 2021, data.org worked to create a landscape map to surface and categorize the various players in the data for good space. That project, led by data.org Fellow Jake Porway, endeavored “to bring clarity to the conversation by going beyond mapping organizations into arbitrary groups…[and advancing it by] creating an ontology for what Data for Good initiatives seek to achieve.”

Now, in the second half of the year, data.org is building from the findings of the landscape map. The organization is surfacing key provocations and insights to partners and funders that it hopes will catalyze a paradigm shift towards community-led, long-term projects that will support and sustain the social good sector using data to realize impact.


Beyond new, shiny things

Angela Oduor Lungati, event panelist and Executive Director of Ushahidi, a global nonprofit technology company based in Nairobi, shared that in her experience the bulk of social impact funding tends to go to “new, shiny things.” As a result, there are communities in the developing world that are not reached by those efforts.

“I recognize that emerging technology is very fascinating and it obviously offers excellent opportunities for the ecosystem,” Lungati told attendees. “But we need to keep in mind that there are many parts of the world that cannot access and reap benefits from some of the technological advancements that we’re all pushing for. The fourth industrial revolution — things like AI automation and the internet of things — are emerging simultaneously, while aspects of the third industrial revolution — digitization and internet connectivity — are yet to spread and mature in many parts of the developing world. It would be very helpful to have funding priorities that don’t limit organizations and implementing partners to very specific tools and technologies to achieve their impact.”

Lungati called on funders to lean on the expertise of grantees and implementing partners to make that happen.

“Create a model where you can co-create your funding priorities. That way, the people you’re looking to support will be able to offer insight into their content, tech, and their support needs,” she said.

Katherine Lucey, Founder and CEO of Solar Sister, an organization that empowers women entrepreneurs in sub-Saharan Africa to bring clean energy to their communities, said that if funders continue to chase the “front edge of what’s new and what’s sexy,” organizations like hers will not get the funding they need to support underserved and systemically overlooked communities.

“The challenge with this is, as a real practitioner…we are exactly the kind of organization, and our women entrepreneurs are the people who get left behind if we’re not careful,” Lucey said. “As we see so much happening in this data world, we can see people making incredible leaps and bounds of how to use and apply data, but when we look at how we bring that to these rural, grassroots businesses, there’s an enormous gap — and that gap is infrastructure. It’s access to the data itself. It’s access to the technology that they need to access the data.”

Lucey called on funders to consider the whole breadth of the ecosystem.

“So often the resources go into that front end of what’s new, what’s leading-edge, and what pushes the industry forward,” Lucey said. “But if we don’t also remember to look backward, and look at the breadth, and look at the very last mile and look at those who don’t have the same access, we’re not going to move the entire ecosystem forward, and we’re not going to move the entire system forward. We’re going to only stretch out that front end.”

Neil Myrick, Global Head of the Tableau Foundation, shared that reaching that last mile will require extra investment, rigor, and focus to engage with those who haven’t already been reached.

“At Tableau Foundation, our role is to leverage the momentum of the company and build the bridge that carries it through to people who wouldn’t normally be reached by that effort,” Myrick said. “Last month, Tableau had our annual customer conference, and the company pledged to train 10 million data people over the next five years. At the same time, Tableau Foundation pledged a $5 million initiative to help ensure that underserved women and girls around the world were included in that. If you want to reach that last mile and people who have not yet been reached, it takes extra funding. It takes intentionality and focus to get there. And so that’s been our role — leveraging the momentum of the company and building the bridge that carries it through to people who wouldn’t normally be reached by that effort.”


Beyond pilots

Myrick said the embrace of long-term, unrestricted funding is a key part of how Tableau Foundation shepherds organizations past the pilot process and toward impact. That approach, he said, means grantees can actually use the funding to meet specific needs that don’t pertain to the grant, such as acquiring or hiring help to implement other parts of their data infrastructure. It also means organizations can receive continued support as they mature to the point where they’re capable of absorbing additional funding.

“Building data culture and data infrastructure is a very long-term commitment,” Myrick told attendees. “Our grants are typically a minimum of three years. Our funding is all unrestricted. The first three years really help the nonprofit kind of get their feet under them, but they aren’t necessarily like hitting that hockey stick of data uses in the first three years. Most of our long-term grants have been anywhere from three to six years — and it’s taken into year four and five for some of our nonprofit partners to really hit that hockey stick where data’s become a core component of their overall work.”

“We have a strategy for investing in pilots, but when we invest in the pilot, we have a discussion about where we go with this when it’s successful,” Myrick continued. “And we have a plan for how we’re going to actually invest beyond the pilot before we even do the pilot in the first place. Because our funding’s multiyear, grantees have the sure footedness of our support to count on. And that’s really been an important part of our strategy, and it’s worked quite well.”


Move slow and fix things

Michelle Shevin, Senior Program Manager at the Ford Foundation, maintained that the status quo for social impact funding is shepherding us towards a “fragile future” and called on funders to embrace systems containing “friction” like stakeholder engagement, ethics processes, and, perhaps most importantly, oversight and regulation.

“Just as we’ve seen with digital technology over the past 20 years, moving fast actually does break things,” Shevin said. “If we’re positioning data science and data-driven technologies as something that can be part of moving toward a less fragile future, I’d argue that perhaps counterintuitively, we actually need a lot more friction. We often hear an argument that positions forces like regulation as a counter to innovation, but in reality, sources of friction are necessary in data work. This can look like data sheets for data sets or model cards, or authentic engagement with impacted communities, or ongoing ethics processes. And yes, it could look like oversight and regulation — these are actually critical to sustainable impact and innovation that scales, so perhaps these are even the most important parts of trying to do good with data.”
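
As one concrete illustration of the "friction" artifacts Shevin names, the sketch below shows roughly what a model card might record before a system ships. It is a hypothetical, minimal structure assuming Python; the field names and example values are our own for illustration, not any standard schema or anything discussed at the event.

```python
# Minimal illustrative sketch of a model card as structured "friction".
# Field names and values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    out_of_scope_uses: list[str] = field(default_factory=list)
    training_data_summary: str = ""
    evaluation_by_subgroup: dict[str, float] = field(default_factory=dict)  # e.g. error rate per group
    known_limitations: list[str] = field(default_factory=list)
    review_contacts: list[str] = field(default_factory=list)

card = ModelCard(
    model_name="loan-eligibility-v0",
    intended_use="Flag applications for additional human review, not automatic denial.",
    out_of_scope_uses=["fully automated credit decisions"],
    training_data_summary="2015-2020 applications from one regional lender; urban areas over-represented.",
    evaluation_by_subgroup={"group_a_false_positive_rate": 0.08, "group_b_false_positive_rate": 0.15},
    known_limitations=["performance gap between subgroups", "no data on thin-file applicants"],
    review_contacts=["ethics-review@example.org"],
)
print(card.model_name, card.evaluation_by_subgroup)
```

Writing down intended use, subgroup performance, and known limitations in this way is exactly the kind of slowing-down step that makes gaps visible before deployment rather than after.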


The unusual suspects

Shevin stressed the importance of engaging and centering impacted communities as a “central” source of this friction.

“It takes social scientists, and people with lived experience, and data scientists working together to actually move more slowly and fix things,” Shevin said. “They could be artists, activists, hackers, or anthropologists, and they all share a commitment to public interest values like equity, accountability, justice, and these public interest technologists are committed to really prioritizing those values and their work.”

“I’d love to see more opportunities to co-create agenda and priorities,” Lungati said in agreement. “Consider the unusual suspects and make them part of the process. Let’s have a conversation around what the issues are and what our needs are or what the needs of other people are, to then guide what the foundations or philanthropies are thinking about investing in.”

Centering those voices and collaborations, Shevin proposed, might lead us to contemplate a new “north star” for innovation and progress.

“If science and technology projects are focused narrowly, and if they’re happening amidst unabated and unmitigated inequality, and climate change, and natural resource extraction and ongoing de-wilding of critical ecosystems, we risk getting stuck trying to point the technology at the symptom instead of empowering people toward a structurally different future,” Shevin said. “We actually need a different north star for innovation and progress as a species. Something, maybe, less focused on economic growth and value extraction, and more focused on interdependence and mutual care. I’m really energized by the idea of making space for different possibilities for the future, and to get there, I actually think we need to slow down.”


Rhythm and shadow

To close the event, Mikhailov asked the funders in attendance to be mindful of the unintended consequences of their grants — and to place as much of a premium on rhythm as one might do on speed.

“Be aware of your shadow,” Mikhailov cautioned. “Be aware of the power you have as a funder when you inadvertently cause harm in the system that you don’t intend — through setting targets or KPIs which lead your grantees to do things that you don’t intend. And as a practitioner of tech as well, I would say worry about rhythm more than speed. It’s not about how fast you get it. It’s about rhythm, and rhythm is different from speed, because it’s about your relationship with one another, as in dance or in sport. Ask yourself, ‘How do you as a tech creator build a relationship with the users of your tech, with the communities who will be affected?’ And that rhythm is more important than the speed.”


Produced in partnership with data.org. To watch the full event, click here. To learn more about their work, visit data.org. To learn more about the challenges of overreliance on pilots in the social impact sector, read Danil Mikhailov’s op-ed on NationSwell.

Putting pilots under the microscope

There is a tendency in the data for good space to invest in the pilot of a project without seeing the project through. This tendency, which has us investing only in the short term with no thought or plan for taking projects forward, is called pilotitis. Pilotitis is pervasive across the social impact sector. It leaves us with projects that are orphaned or abandoned, and it wastes our money and time.

Pilotitis comes from human nature. We embark with good intentions, with pilots that are designed for entirely good reasons — solving a problem, testing hypotheses, using a promising new technology, and seeing if you can be agile in your pursuit of a solution. 

But as you move out of the pilot phase and start actually pursuing your goal, you suddenly start to find difficulties getting the financial support you need. That’s because the people who invest in social impact projects tend to get caught up in pursuing short term objectives. You might find that you and your funders will launch digital interventions together, you’ll get a great headline that generates buzz, you’ll celebrate that hype — and then your funder moves on. 

And if this happens to you over and over again, chances are you’ll move on, too. Welcome to pilotitis. 


At its worst, this tendency is not only wasting time and resources; it actually kills better work. If a number of projects are competing for attention, resources, or influence, the shiny, short-term ones often win out because the teams behind them can invest all their energy in the things that make them shiny: the marketing, the push, the successful first launch. They don't need to take on the cost of long-term infrastructure, support, and maintenance contracts, all the boring, long-term things you'd need to make the work stick.

This isn’t just a theoretical problem — there are famous cases of pilotitis that became so severe that national governments have had to intervene. In 2012, Uganda had so many pilots of mobile health tech that the whole system was fundamentally overwhelmed: every hospital, every public health body in Uganda had three, four, sometimes five approaches — often from very well-meaning, global-north charities and philanthropic bodies — saying, “We can invest in this app, or in this way of doing things.”

They had so many pilots that it actually hindered Uganda’s ability to run the basic health service in the country. So the national government put a moratorium on the development of any more mobile health projects.

I suspect the cycle is in tune with new technologies. Back in 2012, when the Ugandan government made its ruling, it was the age of the smartphone. The promise of new mobile technology and apps created a feeding frenzy, an excitement, and that hype is a big part of what instigated the problem. Today, all the hype is around artificial intelligence. The next era of pilotitis in our sector will be around AI, the next big thing in data science.

We’ll likely never cure pilotitis completely. It’s human nature. What we can do is minimize it, and make sure it doesn’t disrupt too much. 

If we want to do better, we should start by finding something that excites us in the unsexy act of longer-term maintenance and support of good projects. Not everyone in my sector — or among the people who fund my sector — knows about the Ugandan case study, so we need better education about the consequences of getting it wrong. We also need to make it easier to fund the long-tail parts of the work that make a project successful over the long term, like support, maintenance, documentation, testing, and community building.


Pilotitis leads to duplicative work: in some cases, ten or twenty of the exact same type of project each getting just a little bit of funding. To solve that, we need better relationships between funders. Our funders need to collaborate and cooperate with one another, and they need to be transparent about what they’re funding so that they can identify duplicates. They need to start saying, “Instead of us each funding a slightly different version of the same thing, let’s all unite around one technology, and maybe even open source that tech so that others can benefit.”

At the moment, the sector works a lot with one-to-one programs. You get funding to do a great thing with data, and great, that project works. But there is no learning across projects, and the projects don’t build from one another’s successes, failures, breakthroughs, and findings. 

If you want to cross a river, pilotitis will have you throwing stones in the water until you have enough at the bottom to be able to wade across. That could take years — if it ever happens at all. We need to be artisans. We need to work together to architect a bridge. A bridge will use far fewer stones. A bridge will help us get across the river much faster.

Collaboration is key. At the moment, data.org is launching the Epiverse program for epidemiology. One of the drivers for us taking up this challenge is that in the world of epidemiology, the tools to create models are often built locally, within each team. This means the same tools are recreated again, and again, and again at each university. We’re helping the field take a step back, so that we can invest as a community in open-source tools, properly support the maintenance these tools require for long-term sustainability, and then make these tools available for free to the community.

To solve big problems, you need multiple disciplines involved. There is no app that can solve climate change. There is no app that can solve the COVID-19 pandemic. Some problems are just too big. So many of the important problems in our world are too big to be solved by a single discipline, which means you need to work across disciplines, bringing together experts from the physical sciences, social sciences, technology, industry, the social impact sector, policy and government. Interdisciplinarity is the commitment to investing in, and working with, people across these disciplines, creating partnerships and connections that have expertise and fluency in more than one area. 

It’s a powerful tool for fighting pilotitis, because if you don’t understand the subject matter, or hold just one narrow view of the problem, you’re more likely to be mobilized by the flashy rather than the effective.

We saw many examples of this narrow view at the very beginning of the pandemic, when there were hundreds of attempts by tech startups to build apps to diagnose COVID-19. But a report from the Turing Institute found that the majority of these apps didn’t work because they didn’t involve a single doctor in the development process. They didn’t take into account the kind of nuances that only a professional knows — and once again, we have another high-profile example of a proliferation of pilots that get abandoned. This is where interdisciplinarity can help. You need interdisciplinary scholars and professionals working within teams to make them more sustainable, to help them go beyond the pilot, to be implemented, and to be successful.


Just as you need doctors to help the technologists building COVID-19 diagnostics, the reverse is also true: getting technical expertise into the decision-making process of what gets funded is very important to help mitigate pilotitis. But that’s easier said than done: many funders don’t have in-house technology and data experts with the skills and expertise to advise on funding proposals.

In a way, data.org was set up to try to fix that. One of our jobs is working with multiple funders as their out-of-house tech experts. We provide our partners with the tech expertise to help them guide investment into their products. Bringing experts into the decision-making process will invariably steer funders away from focusing too much on pilots that don’t go anywhere, and place the focus where it actually needs to be: on sustaining long-term projects that actually solve problems.

So this is the data.org prescription for curing pilotitis: don’t be seduced by the hype, invest for the long term, collaborate across funders to unite around shared, open solutions, and build interdisciplinary teams, where technologists and subject-matter experts work through the world’s complexity together. With this approach we can achieve the agile learning and growth that a promising new pilot provides, and build on that momentum for maximum and sustained impact.


Danil Mikhailov is the Executive Director of data.org. To learn more, please join us for an event on advancing the “Data for Good” sector.

Opinion: It’s Time to Tackle the Deluge of Disinformation

The internet and social media can amplify misinformation at extraordinary speed. To give the truth a fighting chance, we urgently need to make important changes to our digital life.

The COVID-19 pandemic has created a unique moment of crisis that has left all of us more engaged than ever with digital platforms, and our government’s inability to adequately regulate the tech sector has left the whole world imperiled by the speed with which a lie can travel.


The perils of disinformation are all around us. Disinformation was present long before the Trump administration, but it was exacerbated by a president who lied more than 30,000 times in office, and who was surrounded by a political machine and media ecosystem that amplified those lies. This eventually resulted in the “Big Lie” of a stolen election, and a violent insurrection in our nation’s capital on January 6th that attacked the very pillars of American democracy.

If that wasn’t bad enough, misinformation is hindering vaccination efforts by spreading false rumors about vaccines that could lead to more deaths and the development of new variants that could resist vaccines. These are just two of the most glaring examples of the many societal ills that arise from vast systems of disinformation brainwashing people with lies.

Both the fault and solution largely lie with the big tech companies that have come to dominate our lives. Never in the history of the world have there been companies as powerful as the likes of Facebook, Google and Twitter. Facebook, for instance, has more information on U.S. citizens than the U.S. government. With billions of people using the products of big tech every day, these platforms have an unprecedented ability to influence our world.

These are publicly traded behemoths that might spout clever marketing slogans like “Don’t be Evil,” but in reality they are driven solely by the bottom line. We are mistaken in thinking that companies like Facebook are social media platforms that help people connect. Their true function is as advertising companies. Facebook might hide behind the fig leaf of saying it doesn’t sell your personal information, but in reality it sells access to you through the vast number of data points it has collected about you. These platforms are “walled gardens” that offer companies the chance to micro-target people to sell their products. Your activity on these platforms – and the way they follow you around the internet on your phone or PC – enables them to understand you at a shockingly granular level. So granular, in fact, that they can predict your behavior, needs, and wants with incredible precision.

Some might be unbothered by this. A new mother might benefit from Facebook seeing her pictures of her baby and targeting her with diaper advertisements. A young artist may feel that all of the content on their feed is supportive of their work, or even inspiring. But of course the algorithms behind such targeting are far more nefarious than just these benign examples.


Guillaume Chaslot is a former Google engineer who helped develop the YouTube algorithm that keeps viewers glued to the platform to the tune of one billion hours per day. First as a whistleblower, and now as a researcher studying these platforms, he has documented how their algorithms can send users down a rabbit hole of disinformation in the effort to keep them hooked: in other words, a Second Amendment supporter viewing related videos on YouTube might find themselves slowly sucked into the world of dangerous, but popular and influential, lies that is QAnon.

We are not powerless against the dangerous speed with which these lies travel, but the road to reducing the amount of disinformation plaguing our society is a challenging one. We have to tackle these issues from all sides: we require increased digital literacy education, regulatory and legislative reform, and the design of innovative new technologies to solve these systemic problems.

We must start with greater regulation of the tech platforms that are radicalizing societies around the world. A first step should be requiring and enforcing identity checks. It is speculated that more than half of the users on Facebook and Twitter are bots, and these are often the worst offenders in spreading lies, whether they are Russian agents pushing a divisive agenda to undermine American democracy or anti-vaxxer propaganda undermining vaccination efforts. There is no reason these anonymous bots should be able to wreak havoc as they do, and getting rid of anonymized accounts would go a long way toward clearing the platforms of the accounts that amplify disinformation.


Revisiting Section 230 of the Communications Decency Act is also essential. This is, however, a highly nuanced undertaking that should try to find a middle-ground that allows these platforms to continue to thrive as forums for free speech but doesn’t allow tech companies to simply hide behind the law and permit disinformation to thrive. Hiring more moderators and investing in technological solutions to police malicious content is a good place to start.

Another important step is to require these big tech companies to pay for the journalism featured on their platforms, as Australia has recently done. The Australian law is far from perfect, but government regulation there has provided a much-needed lifeline to publications whose model has been upended by tech platforms that rely heavily on the sharing of news stories but don’t compensate publications for it. The disinformation problem is exacerbated when credible publications that can debunk lies are increasingly marginalized and put out of business.

Civil society and foundations have an important role to play too. The challenge of misinformation is not going to be fully solved anytime soon, even with strong regulation, so we need to be teaching people, especially children, how to identify it and become more conscious consumers of digital content.


The tech companies are feeling the pressure on this issue and are trying to reduce the heat on them. Twitter put caveats on tweets around the election by partisans spreading misinformation, and Facebook and others have invested in both human and technical solutions to weed out bad offenders. But despite seemingly endless financial resources to throw at the problem, that won’t be enough. A recent study showed that disinformation on Facebook is 68% higher in Italy than in Ireland because Facebook is better equipped to handle this challenge in English than in other languages. Consider Facebook’s global reach and how many languages there are, and the scope of this global issue becomes even more apparent.

As we get closer to another pivotal election – the 2022 midterm elections – the time is now to reduce the speed by which a lie can travel and give the truth a chance to set the record straight. It is not an exaggeration to say that democracy, science, health and the ties that bind our society are on the line unless we rise to the challenge of this dangerous trend.


Brittany Kaiser was the whistleblower in the Cambridge Analytica scandal and is the founder of the Own Your Own Data Foundation. Ann Ravel is the Director of the Digital Deception project at MapLight and the former Chair of the Federal Election Commission. Jeremy Hurewitz is Curation Director at NationSwell.

To Build It Back Better, Nonprofits Must Become Data Guardians

For #BuildItBackBetter, NationSwell asked some of our nation’s most celebrated purpose-driven leaders how they’d build a society that is more equitable and resilient than the one we had before COVID-19. We have compiled and lightly edited their answers.

Data use and ownership is concentrated in a few tightly held hands. Those few hands make decisions about how the data of billions of individuals is used, re-used and shared. Who owns our data and what they do with it are perhaps the two most important questions of our age. Asking individuals to become their own data stewards is asking too much — between baffling usage agreements, online scraping, and more. We need institutions to step forward.

Nonprofits are uniquely positioned to serve this role. As charitable organizations, they are already holders of the public trust. They also have the capacity (with a little work) to understand data at an institutional level – identifying relationships between the harms or vulnerabilities they fight and the population they serve. To see just a few examples of that, look to health-oriented nonprofits that advocate for delivery of federal services based on census data or environmental nonprofits that can monitor air quality and pollution.

In a world where data has become a driver of both opportunity and vulnerability, nonprofits across the spectrum of social change need to equip themselves to serve as champions for data used in just and equitable ways. To build it back better, they must become data guardians for the constituencies they serve.

We see the problem in the “Asterisk Nation” — a name coined by the National Congress of American Indians to describe Indigenous populations who were represented so poorly in federal data sets that they were simply described with an asterisk. The Congress advocates for “accurate, meaningful, and timely data collection in American Indian/Alaska Native (AI/AN) communities.”

We’re beginning to see how cooperatives and nonprofits like NCAI are stepping forward to become data advocates and take control of data gathering and stewardship in these communities, presenting a new face to federal service providers and demanding what is just. This same model could take hold across the data landscape if we equip nonprofits with this capacity.

Investing in nonprofits to build data capacity, to understand the interplay between data stewardship and their core missions, and to equip them to become effective advocates can rebalance the existing power dynamic of data stewardship – moving voice and agency into the hands of public institutions, and ultimately into the hands of individuals themselves. For nonprofits, it means building data capacity and maturity, whether internally or through partnership. New programs and projects should integrate data planning, stewardship, and advocacy — and seek support for these functions. For philanthropy, it means recognizing that building these data capacities is critical for programmatic success, and prioritizing support for these efforts.

Among the challenges we should aim to overcome is any sensibility that would suggest data and technology are either too complex to understand, or outside of our control and responsibility. We can make progress on this by first developing our shared vocabulary for what data is, how it’s generated and why and how data is used in the world.

Vilas Dhar is President of the Patrick J. McGovern Foundation and Patrick McGovern is a founding Trustee. The foundation is a 21st-century philanthropy advancing artificial intelligence (AI) and data solutions to create a thriving, equitable, and sustainable future for all.