Protesters Are Fighting to Stop AI, but They’re Split on How to Do It

PauseAI protests are underway in London, New York, San Francisco, and across the globe. Its members have wildly different opinions on what the group should do next.
Protesters working in artificial intelligence gathered in London in 2023. Photograph: ZUMA Press, Inc. / Alamy Stock Photo

On a side street outside the headquarters of the Department of Science, Innovation and Technology in the center of London on Monday, 20 or so protesters are getting their chants in order.

“What do we want? Safe AI! When do we want it?” The protesters hesitate. “Later?” someone offers.

The group of mostly young men huddle for a moment before breaking into a new chant. “What do we want? Pause AI! When do we want it? Now!”

These protesters are part of PauseAI, a group of activists petitioning for companies to pause development of large AI models, which they fear could pose a risk to the future of humanity.

Other PauseAI protests are taking place across the globe: in San Francisco, New York, Berlin, Rome, Ottawa, and a handful of other cities.

Their aim is to grab the attention of voters and politicians ahead of the AI Seoul Summit—a follow-up to the AI Safety Summit held in the UK in November 2023. But the loosely organized group of protesters itself is still figuring out exactly the best way to communicate its message.

“The Summit didn’t actually lead to meaningful regulations,” says Joep Meindertsma, the founder of PauseAI. The attendees at the conference agreed to the “Bletchley Declaration,” but that agreement doesn’t mean much, Meindertsma says. “It’s only a small first step, and what we need are binding international treaties.”

The group’s main demand is for a pause on the training of AI systems more powerful than GPT-4. It is calling on all countries to implement this measure, but it specifically calls out the United States as the home of most leading AI labs. The group also wants all UN member states to sign a treaty establishing an international AI safety agency responsible for approving new deployments of AI systems and training runs of large models. The protests took place on the same day that OpenAI announced a new version of ChatGPT designed to make the chatbot act more like a human.

“We have banned technology internationally before,” says Meindertsma, pointing to the Montreal Protocol, a global agreement finalized in 1987 that saw the phaseout of CFCs and other chemicals known to deplete the ozone layer. “We’ve got treaties that ban blinding laser weapons. I’m pretty optimistic that there is a way in which we can pause.”

One protester at the London march, Oliver Chamberlain, says he’s not sure that companies committing to pause their AI research is likely, but he feels so concerned about the future that he was compelled to protest. Only substantial regulation on AI would make him feel more optimistic about the situation, he says.

There is also the question of how PauseAI should achieve its aims. On the group’s Discord, some members have discussed the idea of staging sit-ins at the headquarters of AI developers. OpenAI, in particular, has become a focal point of AI protests. In February, PauseAI protesters gathered in front of OpenAI’s San Francisco offices after the company changed its usage policies to remove a ban on military and warfare applications for its products.

Would it be too disruptive if protesters staged sit-ins or chained themselves to the doors of AI developers, one member of the Discord asked. “Probably not. We do what we have to, in the end, for a future with humanity, while we still can.”

Meindertsma became worried about the consequences of AI after reading Superintelligence, a 2014 book by philosopher Nick Bostrom that popularized the idea that very advanced AI systems could pose a risk to human existence. Joseph Miller, the organizer of PauseAI’s protest in London, was similarly inspired.

It was the launch of OpenAI’s large language model GPT-3 in 2020 that really got Miller worried about the trajectory AI was on. “I suddenly realized that this is not a problem for the distant future, this is something where AI is really getting good now,” he says. Miller joined an AI safety research nonprofit and later became involved with PauseAI.

Bostrom’s ideas have been influential in the “effective altruism” community, a broad social movement that includes adherents of long-termism: the idea that influencing the long-term future should be a moral priority of humans today. Although many of PauseAI’s organizers have roots in the effective altruism movement, they’re keen to reach beyond philosophy and garner more support for their cause.

Holly Elmore, director of PauseAI US, wants the movement to be a “broad church” that includes artists, writers, and copyright owners whose livelihoods are put at risk by AI systems that can mimic creative works. “I’m a utilitarian. I’m thinking about the consequences ultimately, but the injustice that really drives me to do this kind of activism is the lack of consent” from companies producing AI models, she says.

“We don’t have to choose which AI harm is the most important when we’re talking about pausing as a solution. Pause is the only solution that addresses all of them.”

Miller echoes this point. He says he’s spoken to artists whose livelihoods have been impacted by the growth of AI art generators. “These are problems that are real today, and are signs of much more dangerous things to come.”

One of the London protesters, Gideon Futerman, has a stack of leaflets he’s attempting to hand out to civil servants leaving the building opposite. He has been protesting with the group since last year. “The idea of a pause being possible has really taken root since then,” he says.

Futerman is optimistic that protest movements can influence the trajectory of new technologies. He points out that pushback against genetically modified organisms was instrumental in turning Europe off the technology in the 1990s. The same is true of nuclear power. It’s not that these movements necessarily had the right ideas, he says, but they prove that popular protests can stymie the march of even technologies that promise low-carbon power or more bountiful crops.

In London, the group of protesters moves across the street to proffer leaflets to a stream of civil servants leaving the government offices. Most look steadfastly uninterested, but some take a sheet. Earlier that day Rishi Sunak, the British prime minister who six months earlier had hosted the first AI Safety Summit, had made a speech in which he nodded to fears of AI. But after that passing reference, he focused firmly on the technology’s potential benefits.

The PauseAI leaders WIRED spoke with said they were not considering more disruptive direct action, such as sit-ins or encampments near AI offices, for now. “Our tactics and our methods are actually very moderate,” says Elmore. “I want to be the moderate base for a lot of organizations in this space. I’m sure we would never condone violence. I also want Pause AI to go further than that and just be very trustworthy.”

Meindertsma agrees, saying that more disruptive action isn’t justified at the moment. “I truly hope that we don’t need to take other actions. I don’t expect that we’ll need to. I don't feel like I’m the type of person to lead a movement that isn’t completely legal.”

The PauseAI founder is also hopeful that his movement can shed the “AI doomer” label. “A doomer is someone who gives up on humanity,” he says. “I’m an optimistic person; I believe we can do something about this.”