Once seen as a distant prospect, the powerful capabilities of artificial intelligence (AI) have rapidly become a reality. The advent of modern AI, which relies on advanced machine learning and deep learning techniques, has left governments scrambling to catch up and decide how to avoid a litany of threats to society, such as increasingly persuasive propaganda, cyber attacks on public infrastructure, and the capacity for unprecedented levels of surveillance by governments and companies.

Faced with the need to mitigate AI risk, countries and regions are charting different paths. The European Union is leading the way with a comprehensive framework focused on protecting individual rights and ensuring accountability for AI’s makers, while China prioritizes state control at the expense of personal freedoms. The United States lags behind as it works to balance innovation with the need to address safety and ethical concerns.

These varied strategies underscore the challenge of regulating AI — navigating competing priorities while addressing its far-reaching impact. On both national and international fronts, can we find common ground to manage AI responsibly and ensure it serves humanity’s best interests?

One expert at the forefront of that debate is political scientist Allison Stanger of Middlebury College. In addition to her professorship in Vermont, Stanger is an affiliate professor at the Berkman Klein Center for Internet & Society at Harvard University and the author of several books, including the forthcoming Who Elected Big Tech?

In an article published in the 2024 Annual Review of Political Science, Stanger and coauthors explore the global landscape of AI governance, highlighting the challenges posed by AI and its potential threat to democratic systems.

Knowable Magazine spoke with Stanger about how AI can be regulated nationally to serve democratic values and how we can establish a global framework that addresses common challenges. The interview has been edited for length and clarity.

How would you characterize AI threats?

There are two ways to look at it. You have what I like to think of as threats to democracy where artificial intelligence exacerbates existing issues, such as privacy concerns, market volatility and misinformation. And then there are existential threats to humanity, such as misaligned AI [AI that doesn’t behave in alignment with intended human goals and values], drone warfare or the proliferation of chemical and biological weapons.

A common argument is that humans have always been anxious about new tech, and AI is just the most recent development. How is AI different?

I think that’s a valid response for most of human history: Technology changes, then humanity adapts, and there’s a new equilibrium. But what’s different about this particular technological innovation is that its creators don’t entirely understand it. If we’re thinking about other technological breakthroughs, like the automobile, I might not know how to fix my car, but there’s somebody who does. The thing about generative AI is that, while its creators understand neural networks and deep learning — the algorithms that underpin modern AI — they can’t predict what a model is going to do.

That means if something goes terribly wrong, they can’t immediately know how to fix it. It’s this knowledge element that takes us beyond ordinary human capacities to think and understand. In that sense, it’s really like Alien Intelligence.

How could AI make drone warfare and the proliferation of chemical and biological weapons worse?

Existential threats to humanity don’t necessarily mean killer robots: They can just mean AI systems that run amok, that do things they weren’t designed to do or that you didn’t foresee they could or would do. Existential threats will emerge if AI reaches a threshold where it’s trusted to make choices without human intervention.

Drones are a good example. You might think that, OK, fighter pilots just stay at home; we let the computers fight the computers, and everybody wins. But there’s always collateral damage in this type of warfare, and the more autonomy these systems have, the greater the danger.

And then there’s the risk of AI being used to create biological or chemical weapons. The basic issue is how to prevent the technology from being misused by bad actors. The same goes for cyber attacks, where just one ordinary hacker could leverage open-source AI models — which means models that are publicly available and can be customized and run from a laptop — to break into all kinds of systems.

“The existential threats will emerge if AI reaches a threshold where it’s trusted to make choices without human intervention.”

— ALLISON STANGER

And how does AI exacerbate more imminent threats to democracy, such as misinformation and market volatility?

Even without AI, the existing social media system is fundamentally incompatible with democracy. To discuss the best next political steps, you need a core sense of people believing the same things to be true, and that’s been blown up by recommender algorithms spawning hateful viral transmissions, disinformation and propaganda. AI just automates all those things and makes it easier to amplify and distort human speech. Automation is also what could bring greater volatility to financial markets, as we now have all these automated AI computer models for financial transactions where things happen rapidly without human intervention.

AI also poses a very real threat to individual autonomy. The best way I can describe it is, if you’ve ever been billed for something incorrectly, it’s almost impossible to get a human on the phone. Instead, you’re going through all these bots asking you questions and going in circles, without being directly served. That’s how I would characterize the real insidious threat from AI: If people increasingly rely upon it, we’re all eventually going to be stuck in this Kafka-esque world that makes us feel super small and insignificant and as though we don’t have basic human rights.

How would you define AI governance?

Governance is deciding how we’re going to work together, on the municipal, state, federal and global level, to deal with this immense new technological innovation that’s going to transform our society and politics.

What legislation or other initiatives has the US implemented to protect against AI threats?

The main initiative has been Joe Biden’s executive order on AI, signed in 2023. The order, which instructs the federal government on what to prioritize and how to shape policy, focuses on ensuring AI is safe, secure and ethical by setting standards for testing, protecting privacy and addressing national security risks while also encouraging innovation and international collaboration. Essentially, it outlines guardrails that sustain democracy rather than undermine it. President Donald Trump has already overturned this order.

The Biden administration also created the AI Safety Institute, which focuses on advancing AI safety science and practices, addressing risks to national security, public safety and individual rights. It’s not clear what the fate of that institute is going to be under the Trump administration.

What national laws do you see as the most important to rein in AI?

We need to make it very clear that humans have rights but algorithms don’t. The national discussion about free speech on online platforms is currently distorted and confused. The Supreme Court seems to have believed that social media platforms are just carriers of information; they’re just transmitting things people post in some chronological way. However, the recent unanimous decision to uphold the TikTok ban suggests their understanding is becoming more accurate.

Everything you see online has been mediated by an algorithm that’s specifically geared to optimize for engagement, and it turns out that humans are most engaged when they are enraged. And we need to hold the company that designed that algorithm liable for any harm done. Corporations have free speech rights. But a corporation is a collection of humans. And that’s different from a machine, which is an instrument of humans.

Has the US made any progress in this direction?

In the United States, we have actually introduced legislation to repeal Section 230, which, put in simplified terms, is a liability shield that says that platforms aren’t publishers and therefore aren’t responsible for anything that happens on them. No other companies in the United States besides the technology companies have this liability shield.

“Corporations have free speech rights. But a corporation is a collection of humans. And that’s different from a machine, which is an instrument of humans.”

— ALLISON STANGER

By having that shield in place, the court hasn’t had to deal with any of these issues and how they pertain to American constitutional democracy. If the proposed legislation passes, Section 230 will be sunsetted by the end of 2025, which will allow First Amendment jurisprudence to develop for our now virtual public square and make platforms liable like any other corporation.

Beyond Biden’s executive order, is there proposed AI legislation in the US?

There’s a lot of already-drafted legislation for AI safety. There’s the Algorithmic Accountability Act, which requires companies to assess the impact of automated systems to ensure they do not create discriminatory or biased outcomes; there’s the DEEPFAKES Accountability Act, which seeks to regulate the use of AI to create misleading or harmful deepfake content; and there’s the Future of Artificial Intelligence Innovation Act, which encourages the study of AI’s impact on the economy, workforce, and national security. We just have to work to make all this proposed legislation reality.

But right now, we’re not focusing enough on that. The US is home to the big technology companies, and what the United States does matters for the world. But AI wasn’t a discussion during the election campaign. We’re also not having the public discussion required for politicians to do something about the total absence of guardrails. Europe has been a trailblazer in AI governance, and there’s a lot we can learn from the EU.

What type of regulation has the European Union put in place?

There’s the EU Artificial Intelligence Act, which classifies AI systems into risk levels (unacceptable, high, limited, minimal) and imposes stricter rules on higher-risk applications; the Digital Markets Act, which targets large online platforms to prevent monopolistic practices; and the Digital Services Act, which requires platforms to remove illegal content, combat misinformation and provide greater transparency about algorithms and ads.

Finally, there’s the earlier GDPR — the General Data Protection Regulation — which gives individuals more control over their personal data and imposes requirements on companies for data collection, processing and protection. A version of the GDPR was actually adopted by the state of California in 2018.

How do you see us achieving global governance of AI? Should we have international treaties like we do for nuclear weapons?

I think we should aspire to treaties, yes, but they’re not going to be like the ones for nuclear weapons, because nuclear is a lot easier to regulate. Ordinary people don’t have access to the components needed to build a nuclear weapon, whereas with AI, so much is commercially available.

[Table: AI governance strategies of the United States, European Union and China, showing the US’s market-driven focus on innovation, the EU’s rights-driven emphasis on regulation and privacy, and China’s state-driven approach prioritizing surveillance and party control.]

What is the main difference in how China and the US regulate AI?

China has a very clear ethics underpinning its political system: It’s a utilitarian one — the greatest good for the greatest number. Liberal democracies are different. We protect individual rights, and you can’t trample on those for the good of the majority.

The Chinese government has tighter control over companies that are building AI systems there. For example, in 2023, China passed its “Measures for the Management of Generative AI Services,” which requires providers to ensure AI-generated content aligns with the government’s core socialist values. Providers must prevent content that could undermine national unity or social stability and are responsible for the legality of their training data and generated outputs.

As there’s a symbiotic relationship between the companies and the state, government surveillance is not a problem: If a company gets your personal data, the Communist Party will get it as well. So China has great AI governance — great AI safety — but its citizens are not free. That’s not a trade-off I think the free world should be willing to make.

How does this difference between authoritarian and democratic systems affect international AI governance?

What I’ve proposed is a dual-track approach, where we work together with our allies on keeping freedom and democracy alive while simultaneously working to reduce the risk of war with non-democracies. There are still things we can agree on with countries like China. For example, we could reach an agreement on no first use of cyber weapons on critical infrastructure.

Now you might say, Oh, well, people will just do it anyway. But the way these agreements operate is that merely talking about the issue and recognizing it as a problem creates channels of communication that can come in very handy in a crisis.

Lastly, how do you think the political divide in America, where Republicans tend to support a hands-off approach to business, will affect regulation of AI?

There are real believers in laissez-faire approaches to the market, and Republicans often see the government as a clumsy administrator of regulations. And there’s some truth to that. But that raises the question of who is going to put guardrails in place, if not government. It’s not going to be the companies — that’s not their job. It’s the government’s job to look out for the common good and ensure that companies aren’t overstepping certain boundaries and harming people.

Europeans understand that instinctively, but Americans sometimes don’t — even though they’re often benefiting from government guardrails to ensure public safety. My hope is that we can turn them around without having a large-scale catastrophe teach them through experience.