News Industry Faces Strike Over AI Use as Technology Reshapes Journalism

Friday, February 27, 2026 at 12:45 AM

Journalists at ProPublica are threatening to strike over artificial intelligence policies, marking what could be the first newsroom labor dispute centered on AI technology. The conflict highlights growing tensions as news organizations rapidly adopt AI tools while grappling with transparency, job security, and accuracy concerns.

The journalism industry finds itself racing toward an AI-driven future, wrestling with fundamental questions about technology integration, transparency with audiences, and the fate of displaced workers.

These concerns took center stage as ProPublica reporters organized picket lines this month, moving closer to what experts believe could be the first newsroom strike primarily focused on artificial intelligence policies.

Industry observers predict this won’t be an isolated incident.

Artificial intelligence has certainly benefited journalists by streamlining complicated processes and reducing time spent on routine tasks, especially for data-heavy reporting. News outlets are deploying AI to analyze documents like the Epstein files, generate headline suggestions, and create story summaries. Automated transcription has nearly eliminated manual interview typing, and even basic Google searches now incorporate AI technology.

However, the rush to implement AI solutions in a financially struggling industry has led to multiple embarrassing corrections and retractions.

Over the past year, Bloomberg published several corrections for errors in AI-created news summaries. Business Insider and Wired were compelled to pull articles attributed to a fictional writer named Margaux Blanchard. The Los Angeles Times encountered problems with AI-generated opinion content. Ars Technica discovered AI had invented quotes, and the publication—which regularly covers AI risks—compounded its embarrassment by failing to follow its own disclosure policies.

The ProPublica labor dispute stands out because it addresses issues sparking debates across the industry. The union representing ProPublica’s journalists is negotiating its first contract with the investigative news organization and seeks commitments about transparency and human oversight in AI implementation—demands echoing throughout the profession.

Beyond organizing informational pickets, union members voted overwhelmingly to authorize a strike if negotiations fail, according to Jen Sheehan, spokesperson for the New York Guild representing the journalists.

“It feels to me pretty monumental when we think about the trajectory of AI and journalism,” said Alex Mahadevan, an expert on the topic at the Poynter Institute journalism think tank.

ProPublica has declined the union’s requests, according to labor representatives. The company’s position reflects arguments made in a widely circulated essay titled “Something Big is Happening” by author and investor Matt Shumer, who spent six years developing an AI startup. Shumer wrote that technology advances so rapidly that “if you haven’t tried AI in the last few months, what exists today would be unrecognizable to you.”

This rapid evolution explains why news executives hesitate to commit to written guarantees that could quickly become obsolete.

Instead of making potentially unkeepable promises, ProPublica is investigating how technology might expand opportunities for investigative journalism, company spokesman Tyson Evans explained. Should AI-related layoffs occur—which Evans called unlikely—ProPublica proposes enhanced severance packages for affected employees.

“We’re approaching AI with both curiosity and skepticism,” Evans said. “It would be a mistake to freeze editorial decisions in a contract that will last years.”

Among 283 contracts at American news organizations negotiated by NewsGuild-USA, 57 include artificial intelligence language, according to union president Jon Schleuss, whose organization represents more journalists than any other nationwide. These provisions first appeared in 2023, with The Associated Press among early adopters. Schleuss advocates for expanding such contract language.

Progress faces obstacles, given many outlets’ reluctance to accept binding restrictions. Trusting News, an organization encouraging news companies to develop and publicize AI policies, estimates fewer than half of U.S. outlets have done so.

“I think it is becoming harder,” Schleuss said, “because too many newsrooms are being run by the greedy side of the organization and not by the journalism side of the organization.”

The guild pushes for contracts guaranteeing AI won’t eliminate positions—an unsurprising stance for organizations designed to protect employment. Schleuss frames proposals requiring human journalist involvement in AI use as error prevention measures that build reader trust.

“Humans are actually so much better at going out, finding the story, interviewing sources, bringing back the relevant pieces, asking the hard follow-up questions and putting that in a way that people can understand and see, whether it’s a news story or a video,” he said. “Humans are way better at doing that than AI ever will be.”

Not all journalism professionals share this perspective. Chris Quinn, editor of Cleveland’s Plain Dealer, recently expressed frustration with a college graduate who rejected a job offer after being taught that AI harms journalism.

Quinn’s publication sends reporters to gather quotes and information from interviews, then feeds that material into AI systems that write the articles. Humans edit the output, but reporters lose a crucial part of the job: using professional judgment to craft the storytelling. Quinn defended the approach as the best use of limited resources.

Research indicates most American consumers consider it extremely important for newsrooms to disclose AI use in writing stories or editing photographs, said Benjamin Toff, director of the Minnesota Journalism Center at the University of Minnesota. The catch: such transparency decreases rather than increases reader trust in the outlet’s content.

A substantial minority—30% in Toff’s recent study—opposes any AI use in journalism.

Informing readers about AI involvement proves more complicated than it appears. “There are just so many, many uses of AI in journalism, from the very beginning of the reporting process to when you hit publish, that just broadly declaring that when AI is used in the newsgathering process that you have to disclose it, just seems like it is actually a disservice to the reader in some cases,” Poynter’s Mahadevan said.

Two New York state legislators—representing the nation’s publishing hub—introduced legislation this month mandating clear disclaimers when artificial intelligence contributes to published content. The bill’s prospects remain unclear, though both sponsors are Democrats in a Democrat-controlled legislature.

Mahadevan supports policies requiring human involvement—such as editing to prevent mistakes. However, even these requirements invite interpretation, he noted. When outlets deploy chatbots for reader inquiries, do humans edit those responses?

“Speaking realistically, the newsroom of the future is going to look completely different than it does today,” he said. “Which means people will lose jobs. There will be new jobs. So I think it’s important that we are having these conversations right now because audiences do not want a newsroom completely taken over by AI.”
