Artificial Intelligence Transforms Modern Warfare in Middle East Conflict

Military experts say artificial intelligence is revolutionizing both combat operations and information warfare in the ongoing Middle East conflict. AI technology is helping forces process intelligence faster while also being used to create fake news and propaganda that spreads rapidly on social media.

Artificial intelligence technology is fundamentally changing how modern warfare operates, transforming both actual combat and the battle for public opinion in the current Middle East conflict involving Israel, Iran, and the United States.

Military forces are now deploying AI-powered systems to analyze intelligence data, accelerate targeting procedures, and enhance missile defense capabilities. Meanwhile, social media networks have become flooded with computer-generated images, repurposed video content, automated accounts, and algorithm-boosted propaganda designed to build false narratives as quickly as weapons change realities on the ground.

The outcome is a war where speed is crucial not only in aerial and naval operations, but also in digital spaces. The competition now centers on who can establish their version of events first, not just who can launch the initial attack.

Technology strategist John Keith King, who previously served as a US government communications engineer working on critical command systems for senior national leadership, explained that artificial intelligence has already become integrated throughout multiple levels of contemporary military operations.

King spoke with The Media Line about AI’s primary military applications. “One of the most important uses is intelligence fusion,” he stated. According to King, AI technology can quickly analyze massive amounts of satellite photographs, drone recordings, radar information, and intercepted communications, enabling military commanders to locate missile facilities, track troop deployments, and discover hidden infrastructure “with much greater speed and accuracy.”

This explanation matches what officials and news reports have disclosed, though the specific systems currently in use remain classified. During Operation Epic Fury, which commenced on February 28, US and allied forces targeted Iranian command centers, air defense systems, missile and drone launching sites, and military airfields. Admiral Brad Cooper, who leads US Central Command, subsequently revealed that sophisticated AI technologies were helping American forces process vast amounts of information more quickly, while emphasizing that human operators retain final authority over engagement decisions. A DefenseScoop report from March 11 noted that the command did not reveal which specific AI systems were being utilized.

King emphasized that AI’s most significant battlefield role involves enhancing rather than replacing military commanders by speeding up their ability to observe and comprehend situations.

“AI is also heavily used for target identification and tracking,” King said, explaining that computer vision technology can recognize vehicles, weapon systems, aircraft, and other equipment in drone or satellite footage and then continuously monitor them in real time.

This capability proves especially valuable in the type of combat environment currently characterizing the region, according to King.

“The region is characterized by missile arsenals, drone warfare, and dispersed military infrastructure,” King observed. AI technology, he noted, assists analysts in tracking mobile missile platforms, locating drone launch areas, and detecting patterns that might signal an approaching attack, significantly speeding up detection and response capabilities.

Regarding Israel’s AI usage, public information remains limited and disputed. International media reported in April 2024 that the United States was investigating claims that Israel had employed AI to select bombing targets in Gaza. The Israel Defense Forces rejected allegations of using AI systems to identify suspected Hamas members, stating that information systems only served as tools helping human analysts with target identification. Additional reporting in 2025 alleged that US technology companies had supplied AI and cloud computing services to the Israeli military during conflicts in Gaza and Lebanon, contributing to a significant expansion in AI and computing support used for faster target tracking. While this doesn’t definitively prove how these tools were applied in the current Iran conflict, it suggests Israel entered this regional escalation with already expanded AI-enabled military capabilities.

King noted that AI is also becoming increasingly incorporated into both offensive and defensive systems that characterize this conflict.

“Another major application is in autonomous and semi-autonomous platforms,” he explained, pointing out that numerous drones and loitering weapons employ AI-assisted navigation, object identification, and threat avoidance to search extensive areas, identify targets of interest, and transmit targeting information while reducing operator workload.

“AI also plays a growing role in defensive systems,” King added. Missile defense networks, he explained, depend on machine learning to detect approaching threats, eliminate radar interference, and prioritize interceptions, often within seconds.

This evaluation corresponds with the broader characteristics of the conflict. CENTCOM has described the campaign against Iran as heavily concentrated on drone and missile infrastructure, and officials have stated that the United States has needed to defend against large-scale retaliatory attacks while rapidly striking launch sites and command centers. Cooper said AI tools were helping leaders “cut through the noise and make smarter decisions faster than the enemy can react,” while stressing that final engagement authority remained with humans.

While AI is accelerating military decision-making processes, it’s producing similar effects in information warfare.

Yael Moshe, who leads an OSINT team and serves as an intelligence product specialist for the Israeli Defense Ministry’s Coordinator of Government Activities in the Territories, said the digital aspect of warfare is no longer secondary. It has evolved into its own battlefield, powered by AI-created content and social media virality.

“I call it digital psychological terrorism,” Moshe told The Media Line. She said actors like Iran are utilizing AI and recycled footage to overwhelm platforms such as TikTok and Instagram, targeting younger audiences with manufactured realities, including false images of Tel Aviv in ruins and exaggerated portrayals of Iran’s military capabilities.

Moshe explained that these campaigns operate simultaneously on two levels.

“This serves two distinct arenas: manufacturing a fake ‘victory picture’ for Iran’s domestic audience, while simultaneously sowing fear globally,” she observed.

Multiple reports have documented this pattern. A pro-Iran propaganda network has employed AI-generated misinformation and Epstein-related conspiracy theories to promote anti-US and anti-Israel messages to large online audiences. Fabricated AI content about the Iran conflict has also circulated widely on X, including manufactured visuals of attacks, false battlefield scenes, and manipulated imagery amplified by prominent accounts.

Moshe argued that much of this material is technically basic but operationally successful because it spreads faster than fact-checking can occur.

“When we talk about fake news, we mostly see two simple tricks,” she said: old videos from Syria, or even from video games, are relabeled as current attacks, while AI generates false images of Israeli cities on fire. “It takes them 10 seconds to make, but by the time we prove it’s fake, millions of people have already seen it and believed it.”

The danger increases when such content escapes fringe channels and reaches broader audiences, Moshe warned.

Moshe said she personally remains unaffected by such content because she understands ground reality and recognizes psychological warfare tactics. However, “the true danger arises” when fabricated material spreads across social media and “bleed[s] into mainstream media.” That, she cautioned, is when “a localized lie becomes a dangerous global narrative.”

This dynamic has become more apparent as the conflict has expanded. AI-generated images have falsely claimed to show captured US soldiers in Iran, while old footage has been recirculated as new strikes on Tel Aviv. These examples demonstrate that information warfare involves not only persuasion, but also saturation: flooding feeds so rapidly and extensively that verification becomes reactive rather than preventive.

Moshe also highlighted the role of platform architecture itself.

“Seeing people cheer when missiles are fired at us is frustrating,” she said, but added that platforms like TikTok and X reward extreme and hateful content because it generates views. She also noted that much apparent support for such content is artificially amplified: “A lot of this cheering isn’t just real people—it’s fake accounts and bots pushing this hate on purpose to make it look like the whole world supports it.”

She noted that false reports about Israeli leaders, including Prime Minister Benjamin Netanyahu, being killed are part of the same psychological strategy.

“Spreading fake news about Israeli leaders dying is a classic psychological trick,” she said. The goal, she added, is both to create panic within Israel and to provide audiences in Iran or Gaza with a false “victory” to celebrate.

She also described how unrelated global trends are deliberately exploited to expand reach. “And as for the Epstein files, since everyone in the world was searching for it, they started putting Epstein hashtags on their anti-Israel videos. They did this just to ‘hijack’ or jump on the trend and expose it to millions of completely unrelated people so they could see their propaganda. Plus, it’s a way to connect Israel to crazy global conspiracy theories.”

Multiple international outlets have similarly reported that pro-Iran networks used Epstein-related content as part of a broader disinformation system connected to the war.

What emerges from both military and digital fronts is the same fundamental reality: algorithmic acceleration. On battlefields, AI is helping militaries detect threats, identify targets, filter radar interference, and compress the time between detection and action. Online, it’s helping propagandists generate synthetic evidence, capture attention, and create illusions of consensus or victory before facts can be verified.

King cautioned that even on the military side, this speed introduces serious dangers.

“While artificial intelligence can improve precision and situational awareness on the battlefield, it also introduces new strategic risks,” he observed. As AI reduces detection and response times, he said, human deliberation decreases, raising the risk of rapid escalation if systems operate faster than political leaders can intervene.

He described the broader transformation in dramatic terms.

“Artificial intelligence is becoming the central nervous system of modern warfare,” he stated. By combining data from satellites, drones, electronic intelligence, and battlefield sensors into a real-time operational picture, AI compresses “the time between detection, decision, and action,” making wars increasingly influenced by algorithm-assisted decision cycles rather than traditional command timelines.

The same acceleration is now occurring online. On social media, false information can be created, amplified, and accepted before journalists, officials, or analysts have time to debunk it. On battlefields, AI may help identify launchers, prioritize intercepts, or accelerate strike planning. In both arenas, the defining characteristic is velocity.

As King concluded: “AI will not replace military leadership, but it will increasingly shape how quickly leaders must make decisions.”

And as Moshe cautioned, the challenge is no longer only what occurs on the ground, but how rapidly false information about it can become accepted reality.
