Preparing for an inevitable AI emergency

Microchip labeled "AI" (Eugene Mymrin/Getty Images)

Frazier is an assistant professor at the Crump College of Law at St. Thomas University. Starting this summer, he will serve as a Tarbell fellow.

Artificial intelligence is advancing at a pace, and in directions, that even the foremost AI experts did not anticipate. Just a few decades ago, AI was largely theoretical, confined to science fiction and academic research. Today, it permeates nearly every aspect of our lives, from the algorithms that curate social media feeds to the autonomous systems that drive cars. This rapid advancement, while promising in many respects, also heralds a new era of uncertainty and potential peril.

The pace at which AI technology is evolving outstrips our ability to predict its trajectory. Breakthroughs occur at a staggering rate, often in areas previously deemed infeasible or far-off. For instance, the development of GPT-3, an AI language model capable of producing human-like text, astonished even seasoned AI researchers with its capabilities and the speed at which it surpassed its predecessors. Such rapid advancements suggest that the future of AI holds both immense potential and significant risks.

One of the most pressing concerns is the increased likelihood of emergencies exacerbated by AI. More sophisticated AI could enable more complex and devastating cyberattacks, as malicious actors use it to breach security systems once considered impenetrable. Advances in AI-driven biotechnology could likewise aid the creation of deadlier bioweapons, posing unprecedented threats to global security. And the rapid automation of jobs could displace workers faster than societies can adapt, entrenching economic inequality and triggering widespread social disruption and unrest.

The likelihood of an AI emergency, paired with our poor track record of responding to similar crises, is cause for concern. The Covid-19 pandemic starkly exposed the inadequacies of our constitutional order: deep flaws in our preparedness and response mechanisms left us ill-equipped to handle a sudden, large-scale crisis, and our fragmented political system, with its layers of bureaucracy and competing jurisdictions, proved unable to respond swiftly and effectively. These deficiencies raise serious concerns about our ability to manage future emergencies, particularly those precipitated by AI.

Given the profound uncertainty surrounding when and how an AI accident might occur and the potential damage it could cause, it is imperative that AI companies bear a significant responsibility for helping us prepare for such eventualities. The private sector, which stands to benefit enormously from AI advancements, must also contribute to safeguarding society against the risks these technologies pose. One concrete step that AI companies should take is to establish an emergency fund specifically intended for responding to AI-related accidents.

Such a fund would serve as a financial safety net, providing resources to mitigate the effects of AI emergencies. It could be used to support rapid response efforts, fund research into preventative measures, and assist individuals and communities affected by AI-driven disruptions. By contributing to this fund, AI companies would acknowledge their role in creating technologies that, while beneficial, also carry inherent risks. This approach would not only demonstrate corporate responsibility but also help ensure that society is better prepared to respond to AI-related crises.

The establishment of an emergency fund for AI disasters would require a collaborative effort between the private sector and government. Congress could mandate contributions from AI companies based on their revenue or the scale of their AI operations. This would ensure that the financial burden of preparing for AI emergencies is shared equitably and that sufficient resources are available when needed. To safeguard the proper use of the funds, Congress should establish an independent entity tasked with securing contributions and responding to claims for reimbursement.

In conclusion, the rapid advancement of AI presents both incredible opportunities and significant risks. While we cannot predict exactly how AI will evolve or what specific emergencies it may precipitate, we can take proactive steps to prepare for these eventualities. AI companies, as key stakeholders in the development and deployment of these technologies, must play a central role in this effort. By contributing to an emergency fund for AI disasters, they can help ensure that we are equipped to respond to crises in a legitimate and effective fashion.

AI models are being built. Accidents will come. The question is whether we will be prepared to respond in a legitimate and effective fashion.

Read more

The inflammatory rhetoric, meaningless speculation and lack of fact checking by the media may result in young adults rejecting traditional platforms in favor of their well-being. (urbazon/Getty Images)

By focusing on outrage, the media risks alienating younger audiences

Rikleen is executive director of Lawyers Defending American Democracy and the editor of “Her Honor – Stories of Challenge and Triumph from Women Judges.” Beougher is a junior at Amherst College and a co-founder of Students Strengthening American Democracy.

As attacks on democracy and the rule of law continually increase, much of the media refuses to address its role in intensifying the peril.

Instead of asking hard questions and insisting on answers, traditional media outlets increasingly trade news and facts for speculative commentary that ignores a story’s contextual significance. At the same time, social media outlets and influencers stoke anger as an alternative to thoughtfulness.

Athens, Ga., bookstore battles bans by stocking shelves

News Ambassadors is working to narrow the partisan divide through a collaborative journalism project that helps American communities with different political views better understand one another, while giving student reporters hands-on experience in solutions reporting.

A program of the Bridge Alliance Education Fund, News Ambassadors is directed by Shia Levitt, a longtime public radio journalist who has reported for NPR, Marketplace and other outlets. Levitt has also taught radio reporting and audio storytelling at Brooklyn College in New York and at Mills College in Oakland, Calif., as well as for WNYC’s Radio Rookies program and other organizations.

Seeing a lie or error corrected can make some people more skeptical of the fact-checker. (FG Trade/Getty Images)

Readers trust journalists less when they debunk rather than confirm claims

Stein is an associate professor of marketing at California State Polytechnic University, Pomona. Meyersohn is pursuing an Ed.S. in school psychology at California State University, Long Beach.

Pointing out that someone else is wrong is a part of life. And journalists need to do this all the time – their job includes helping sort what’s true from what’s not. But what if people just don’t like hearing corrections?

Our new research, published in the journal Communication Research, suggests that’s the case. In two studies, we found that people generally trust journalists when they confirm claims to be true but are more distrusting when journalists correct false claims.

FCC seal on a smartphone (Pavlo Gonchar/SOPA Images/LightRocket via Getty Images)

Project 2025: Another look at the Federal Communications Commission

Biffle is a podcast host and contributor at BillTrack50.

This is part of a series offering a nonpartisan counter to Project 2025, a conservative guide to reforming government and policymaking during the first 180 days of a second Trump administration. The Fulcrum's cross-partisan analysis of Project 2025 relies on unbiased critical thinking, reexamines outdated assumptions, and uses reason, scientific evidence and data in its analysis and critique.

Project 2025, the Heritage Foundation’s policy and personnel proposals for a second Trump administration, has four main goals when it comes to the Federal Communications Commission: reining in Big Tech, promoting national security, unleashing economic prosperity, and ensuring FCC accountability and good governance. Today, we’ll focus on the first of those agenda items.

Taylor Swift performs on July 27 in Munich, Germany. (Thomas Niedermueller/TAS24/Getty Images)

I researched the dark side of social media – and heard the same themes in ‘The Tortured Poets Department’

Scheinbaum is an associate professor of marketing at Clemson University.

As an expert in consumer behavior, I recently edited a book about how social media affects mental health.

I’m also a big fan of Taylor Swift.

So when I listened to Swift’s latest album, “The Tortured Poets Department,” I couldn’t help but notice parallels to the research that I’ve been studying for the past decade.
