How do we preserve democracy in the age of AI? [Watch or listen]
As an alliance grows between far-right leaders and tech billionaires, revisiting a talk I gave at Harvard a year ago – a vision for making AI work for regular people.
A note to readers about this talk:
Last spring, after closing out my service at the White House, I spent several months traveling across the U.S. and speaking with audiences in media and academia, politics and philanthropy, business, foreign affairs, and grassroots organizing. After a rewarding but highly regimented few years working in Washington, it was fun to have the chance to reflect candidly on issues and hear fresh ideas from people all over the country. But one topic dominated the discussions: the serious threats AI—and the people selling it—pose to our democracy.
Those concerns were well-founded. Powerful, unaccountable AI systems were already stripping people of rights and recourse, concentrating power in the hands of a few tech leaders, and quietly accelerating a drift toward illiberalism. These tools weren’t just disrupting industries; they were reinforcing inequality, undermining civil rights, and reshaping the balance of power in ways most people never consented to and couldn’t fully see. I made the case to audiences—including in an essay for Foreign Policy that I co-authored with Janet Haven, the talented leader of the Data & Society Research Institute—that we could head off these threats with bold, democratic governance: laws to protect rights, empower people to contest AI-driven decisions, break up corporate concentration, and ensure that it’s the regular people—not tech CEOs—who decide how these systems are built and used.
None of that happened. In fact, AI’s threats to democracy have only metastasized since Donald Trump returned to power. On his first day back in office, Trump eliminated the few federal rules that President Biden had put in place to keep AI in check and protect people from the technology’s excesses. The executives who talked a big game about the risks of their technologies, and asked lawmakers to set up regulatory guardrails, have almost entirely changed their tune, even lobbying the Trump Administration to block “overly burdensome state laws.” And the president has handed the keys to the U.S. government, including the sensitive personal data it holds about each of us, to the world’s richest man—who this weekend announced the merger of his social media platform with his AI company.
I thought this would be a good time to revisit a short talk I gave last April at Harvard during a convening of the Boston Global Forum, in which I laid out a vision for “resilient democracy in the age of AI.” I know it can feel almost ludicrous—as we fend off overlapping constitutional crises, witness the dismantling of the transatlantic alliance, and fight to preserve basic assistance for the most vulnerable—to engage with big policy visions that seem so disconnected from this reality. But we have no choice; planting and sowing better ideas is as important as stopping the toxic ones from taking root in our country today. Now is the time for political imagination. We must never stop making our case for a democracy that serves all people.
You can watch or listen to this talk, or read the full text below.
Resilient Democracy in the Age of AI (April 2024)
For the past several years, I’ve sat at the table not just with fellow policymakers, but also with people in Silicon Valley who are developing AI systems, people in academia who are evaluating them, and people in civil society who harbor some fairly serious concerns about how they’re being used.
What I’ve noticed in those overlapping, intersecting spaces is that when we talk about AI and democracy, the same core concerns seem to rise to the surface: deep-fake images in campaign ads, voice-cloning robocalls, election mis- and disinformation at scale.
These threats are real. They are urgent and serious and worthy of our attention and intervention. And yet they are fundamentally downstream issues: consequences of a system and a society that have failed to adequately grapple with earlier, more foundational questions about how we believe AI should be governed.
So there is another class of risks posed by AI to democracy that I want to discuss.
I should note here that I’ll use a few terms interchangeably: AI, algorithmic systems, automated tools. By these I mean, colloquially, a class of data-driven automated tools that play a role in many facets of our lives.
So the first step in making sure AI supports, and does not undermine, democracy is to understand AI systems as we do any other form of power in a free society. To conceive of this technology as we do wealth, or political influence, or media influence, or legal authority. Which is to stipulate that, like any other form of power:
AI should not be hoarded, but wielded democratically;
It shouldn’t be used at a whim, but after deliberate decision-making;
And it shouldn’t advance the interests of only a select few, but priorities forged by consensus and in the best interests of people.
Let’s call this, for our purposes, the AI power premise. I would propose that this premise is how we should think about power in a free society. And therefore it’s also how we should think about powerful AI systems. So it’s in that context — of understanding AI as power — that I believe we should evaluate AI’s impact on our democracy and determine where we need to intervene.
A physician diagnoses illness by first establishing the range of normal. And if a patient presents with symptoms outside that range, then the patient is sick. So one measure of diagnosing the health of our democracy is considering how close we are to that AI power premise.
We know, of course, that our democracy is presenting far outside the range of normal and that AI tools often aren’t helping.
It should not be normal in a healthy democracy for Black Americans to be arrested and jailed after they are misidentified by a facial recognition tool. But it’s happening in Michigan and New Jersey and Georgia.
It should not be normal in a healthy democracy for local governments to use chatbots that tell landlords they can discriminate against tenants who are trying to pay rent through housing vouchers. But that’s happening too, in the City of New York, right now.
It should not be normal in a democracy for us to see almost instantaneous corporate consolidation in the very first months after a new, powerful technology emerges onto the consumer market.
It should not be normal to be able to deploy powerful AI tools to surveil warehouse and hotel workers, to cut their pay, or to fire them, all without a human being ever involved in the decision.
None of this should be normal in a democracy. All of it is corrosive. And yet in many cases it’s happening to us, to our communities, without our knowledge, and without our consent.
Our democracy is sick. AI is not the root cause of the illness. But we need to be forthright that these technologies are making things worse.

So I’d like to briefly offer three areas where we can train our focus — to alleviate these threats to democracy and bring us closer to a world where AI is wielded along that AI power premise. These aren’t detailed prescriptions, but prompts for where I believe more work is needed.
First, we must focus on AI’s immediate, discriminatory harms. The ways AI can threaten our civil rights.
There’s been some debate over the last couple of years about whether it’s worth investing in mitigations against AI systems that are trained or deployed in discriminatory ways. The argument suggests that there are bigger, longer-term problems, like AI’s threats to the cyber infrastructure, or robots becoming self-aware.
I find this debate troubling. Algorithmic discrimination undermines multiracial democracy because it breaks the promise of citizenship — that if you participate, you will be represented well, treated well, safe, secure, and prosperous.
So fair treatment by AI cannot be considered a luxury. Any tool that undermines a person’s rights or access to critical services is a threat to democracy. As Vice President Harris said in London last year, an “existential threat.”
In practice, this means we need to understand that AI’s risks to civil and human rights are measurable and then invest proactively in tools that can do that measuring. And if systems don’t pass these tests, or companies and governments don’t use these mitigations, these AI tools should stay on the shelf. Period. And they certainly shouldn’t be baked into a bunch of other products like asbestos to be found at a later date.
These are questions that now lie squarely at the feet of Congress — and Congress needs to consider a new liability regime and a private right of action for ordinary people — making explicit that the civil rights protections afforded by generations of other laws continue to apply even when decisions are baked into computer code.
The second area we need to address is the gulf between the priorities and incentives of ordinary people and of the companies building and using these AI systems. A gulf that is growing rapidly.
Last fall, two separate events happened in the world of AI, but I think we should see them as connected.
First, the venture capital firm Sequoia announced quietly that it would stop investing in new large language models. Since 2021, the firm had poured millions into OpenAI and by the end of last year, they’d decided that they were good. I would posit that a capital market that has unilaterally decided it is done investing in the most powerful technology of our time is not a good sign for democracy and competition.
Now, around the same time as Sequoia’s announcement, the Sam Altman/OpenAI board drama was playing out. And we heard all this discussion about how to structure a corporation so that it can do well and do good — do for-profit business and not-for-profit business. This, also, is a bit of a sleight of hand. It muddles intentions and incentives and it’s a recipe for confusion. And confusion does not favor ordinary people.
In our economic system, profit motives will always beat out public interest. That’s fine — that’s just how our system works. But if we are going to establish effective checks and balances on AI, we just need to be really clear that an AI company, like every company, exists first to serve a profit motive and go from there.
This is a question of aligned incentives as much as it is a question of accountability.
The people building and selling these AI tools have made a bold claim: that AI will eventually be able to solve problems humanity has never been able to solve before. In science and medicine, engineering and human productivity. I hope they are right. Still, the time-honored tradition of Silicon Valley wunderkinds writing checks they can’t cash should move all of us to proceed with some caution.
But if they are right about the unrealized potential of AI then we need to incentivize a lot more competition and find ways to get these systems solving problems that are important to the public.
If they are right, then we need to have a conversation about getting decision-making about these systems, from capital allotment to R&D to deployment, into the hands of ordinary people — people who are perhaps less concerned with everlasting life, and more concerned with improving conditions in this life. People less worried about colonizing the moon, and more concerned with using powerful tools for interests without an immediate commercial upside, like climate mitigation, or earthquake prediction, or accessibility solutions, or educational access.
Certainly that means making sure we have empowered, well-staffed, independent enforcement agencies — from the Department of Justice to the FTC, CFPB, and SEC. But it also means building out public resources to do some of this work. There are some promising initiatives already happening here: the National AI Research Resource project, for example, is an idea that would scale up access to research, compute, and data infrastructure for folks across society. That needs more money and attention.
We should also keep empowering our public engines of innovation — like our national labs and the new Advanced Research Projects Agency for Health — to dramatically scale up their use of these tools in responsible innovation environments, or sandboxes.
The final area I’d offer is to begin viewing tech and AI issues as part of a broader political movement for social progress.
Maria Cantwell, the chair of the Senate Commerce Committee, who began her career as a big tech executive in Seattle, spoke in an interview this week about what can influence corporate tech interests to take precautions. She said: only “bright lines, where you say ‘you can’t do this or the company will be sued,’ is what is going to get the general counsel and the CEO and the CFO and everybody on the same page.”
I think that’s right. And yet, as we speak, these AI companies are scaling up their government affairs teams, building out sophisticated lobbying efforts, in some cases seeking to prevent exactly these kinds of bright lines.
It would be a mistake to stake a political strategy on the hope that industry will voluntarily step up and do the right thing. De facto self-regulation and sweeping legal immunity have failed us and allowed the tech interests of the 21st century to run roughshod over the interests of ordinary people.
So the time has come for a dynamic, vigorous, and well-organized political movement on tech issues and AI issues.
That starts with political education — to help people see themselves in these issues.
To help people understand that AI tools are linked directly to whether they can access quality housing, education, healthcare, economic opportunity. To move people from abstract detachment to a really granular understanding of what these systems mean in our lives for good and for ill.
My grandmother needs to know to ask, when her insurance premium suddenly spikes with no explanation, whether algorithms may have accessed data she never authorized. My neighbor needs to have the critical consciousness to ask why he is being denied a loan, or rejected from a job, or getting targeted online in a particular way. My organizer friends have to know enough about automated decision-making to show up at the hearings in the state legislatures where so much of this AI battle is being fought.
These steps are what the great educational thinker Paulo Freire distinguished as “acts of cognition, not transferals of information.” AI is famously described as operating within a “black box.” But for AI to work in society, we have to take it out of that box. For democracy to survive AI — indeed for AI to work for democracy — we need a population, an electorate, a society that is critically engaged.
Let me close with something Dr. Alondra Nelson taught me long ago, and something she shares frequently with audiences around the world:
There is nothing fait accompli about technology’s role in our society. We are not beholden to these systems. Each of these tools is programmed and deployed by human beings. We get to decide the way this story unfolds. And that will require us to remember that for all the ways AI is new, it is like any power.
And it will require us to insist, to demand, and to organize, to make sure it is used to keep us safe, to support our highest values, and protect the rights of all people.