Artificial intelligence is reshaping nearly every corner of our lives — from how we work and learn to how we participate in civic life. Yet as the technology races ahead, governance has struggled to keep pace.
At the NationSwell Summit on October 22, David Gelles, New York Times reporter and author of Dirtbag Billionaire, hosted a candid panel discussion with Mike Kubzansky, CEO of Omidyar Network, and Miriam Vogel, President and CEO of EqualAI, on how society can steer AI toward the public good. Kubzansky emphasized the urgent need for public oversight and values-based regulation that puts people, not profit, at the center of innovation, while Vogel highlighted practical steps organizations can take now, from embedding accountability in everyday workflows to cultivating ethical reflexes inside teams, to ensure AI serves all communities equitably.
A full recap of the panel’s insights can be found below.
Takeaways:
“We shouldn’t expect profit-driven companies to prioritize the public good — that’s not how capitalism works. If we want AI to serve society, we have to build the incentives and accountability to make that happen.”
— Mike Kubzansky, CEO, Omidyar Network
- Artificial intelligence isn’t a future issue — it’s a governance crisis happening in real time. The world is deploying AI faster than it can define what “good governance” means. Most organizations use AI in some capacity, but few have internal standards, accountability systems, or a clear understanding of what responsible use looks like.
- Every technological revolution has had a societal reckoning — except this one. From pharmaceuticals to nuclear energy, past innovations prompted debate and regulation. In the digital era, no such collective framework exists, leaving critical decisions to private companies and market forces rather than shared values or public consent.
- The real gap isn’t technical — it’s institutional. There are no common definitions, standards, or liability frameworks for AI use. As a result, companies set their own rules, often inconsistently. Building shared norms and accountability mechanisms is now as urgent as any technical breakthrough.
“You cannot regulate your way out of this; governance starts inside the organization. Every company needs to know how it’s using AI, who’s accountable, and what happens when something goes wrong.”
— Miriam Vogel, President and CEO, EqualAI
- Governance begins inside organizations, not in Congress. External regulation alone can’t ensure safe or ethical AI. Companies need internal “AI hygiene”: clarity about where AI is used, who is accountable, and how issues are surfaced and resolved. Without internal governance, regulation becomes meaningless.
- Regulation does not stifle innovation; confusion does. Rules provide clarity, not constraint. Some of the most innovative economies operate under strong governance frameworks. Real innovation thrives in environments where safety, trust, and transparency are built in from the start.
- Public trust is collapsing, and AI literacy is the cure. Half of Americans report being more afraid than excited about AI. Those who understand it are more optimistic, suggesting that AI literacy — not hype or fear — is the foundation for responsible adoption and social trust.
- Profit-driven systems won’t self-correct. Expecting companies to prioritize ethics over revenue misunderstands capitalism’s incentives. Governance must come from a mix of policy, investor expectations, and board accountability — ensuring AI’s social license to operate.
- There’s still time to design responsible AI — but only if we demand it now. Responsible AI isn’t theoretical: it requires clear accountability, transparent testing, and leadership ownership. The companies that get this right will be the ones that earn both consumer trust and long-term viability.
"