Whether it’s a future of work powered by software that supports workers and businesses alike, technology infrastructure to manage sustainable supply chains, improved digital access and safeguards for our democratic process, or the removal of bias from data and AI platforms that impact marginalized communities, the actions we take in the present to invest in equitable digital platforms will determine whether our collective grasp ever extends to our collective reach.
During a NationSwell virtual roundtable on May 25th, a group of cross-sector leaders gathered to discuss the role emergent technologies like generative AI have to play in advancing that impact, and what leaders can do to implement ethical digital and technical solutions in order to scale their work and provide equitable access.
Here are some of the key takeaways:
Organizations must stay disciplined when it comes to asking larger questions about who they’re using AI to serve and what they hope to accomplish. To bridge the gaps between intent, strategy, and the digital products and services that actually get built, companies must establish clear mandates and decision matrices for how best to serve the populations they work for. One of the first steps toward guaranteeing alignment is ensuring that transparency and a clear moral imperative run through the entire organization.
The adoption of an “ethical ombudsman” can help ensure a shared ethical responsibility. Rather than adopting “shiny new tech” for its own sake and then allowing the ethical buck to be passed to the tools themselves, companies and individuals should take a more active role in assuming the ethical burden by creating a position designed specifically to oversee projects at the organizational level and evaluate the potential risks and harm that new technologies can pose to individuals and communities.
Train new systems with humanitarian concerns in mind, not just technological ones. The tech we broadly use every day (the internet, social media, etc.) was created by a relatively small group of people: technologists who are good at making things, but not necessarily experts at holistically considering the ecosystems and people that will use that tech in daily life. To close this gap, we’ll need to build better and more intentional methods for baking public interest into design — potentially by hiring people with humanitarian backgrounds to serve as model trainers and by ensuring more cross-sector representation in design phases.
Drawing distinctions between the types of potential harm that new technologies can cause will be critical to mitigating the damage. We need to think about potential technological harm as falling into two distinct categories: acute harms and institutional harms. The former are harms done to individuals, while the latter are harms to communities and populations. These different types of harm will require different interventions, and getting clear on which is which is the first step in any mitigation strategy.
Pathways to widespread adoption of potentially transformative new technologies must be established in order for underserved populations to thrive. In addition to ensuring pathways to adoption, it’s also imperative to bring in the people who stand to be most affected by the digital divide during the design process and incorporate their feedback into the build. Bringing boots-on-the-ground voices into the regulation process and having the right people around the table for decision-making can also help reduce inherent bias.