AI is Rewriting the Rules of Wartime Evidence
- Harman Kohli

On March 18, 2026, Israeli Prime Minister Benjamin Netanyahu began releasing a series of video clips with a single purpose: proving he was still alive. Yet after each video, public skepticism only grew. What was supposed to be a routine proof of life became a case study in the erosion of video evidence during wartime. To say the world is living in unprecedented times is an understatement: a world leader drinking a cup of coffee would not usually cause international outrage and security concerns.
Rumors of Netanyahu’s death first surfaced on March 2, after the Islamic Revolutionary Guard Corps (IRGC) released a statement claiming it had targeted Netanyahu’s office. Israeli authorities swiftly denied the claim, but the prime minister abruptly disappeared from public view in the following days. On March 12, he held his first press conference since then, over Zoom from an undisclosed location during ongoing Iranian missile attacks. The low-quality footage prompted social media users to examine it closely. They pointed out what appeared to be a sixth finger on the prime minister’s hand, a common artifact of AI-generated content.
The Iranian state immediately capitalized on the optics of the war. Tasnim News Agency, which is run by the IRGC, published an article titled “New Video of Netanyahu Proves Fake,” listing supposed tells of fabrication. The narrative spread beyond state media as users circulated screenshots from AI detection tools such as X’s Grok, which cited “signs like static coffee levels, unnatural lip sync” in his next video and even labeled the footage a “deepfake.” Euronews’ fact-checking team concluded that the footage was authentic, yet that conclusion rested on detection tools of its own. As verification expert Tal Hagin put it, AI detectors “are searching for discrepancies and based on probability.” Someone holding their hand at an odd angle can trigger a false positive just as a fabricated video can. Regardless of the authenticity of the March 13 footage, there is a concerning level of ambiguity in validating AI-era media.
Prime Minister Netanyahu’s office tried again to quell public speculation with a video of him at Cafe Sataf on the outskirts of Jerusalem on March 15. In it, he drinks a coffee and holds up his fingers to rebut the earlier claim that AI had added a sixth one to his hand. Grok labeled this video AI as well, and social media users once again found inconsistencies: the coffee level in the cup appeared not to change, a crease in his jacket pocket abruptly jumped forward, and the shadowing and lighting looked generally questionable.
Another video followed, showing the prime minister interacting with fellow Israelis; many users were quick to point out a ring on his finger that disappeared for a few frames and then reappeared.
In the final clip, he appeared alongside the U.S. Ambassador to Israel, Mike Huckabee, who had a playful exchange with Netanyahu about checking whether he was alive, joked about the fingers again, and went on to discuss a punch card of Iranian personnel to be targeted in the future. Once again, Grok labeled the clip AI.
Netanyahu’s case is one of many in the conflict. On the Iranian side, the narrative has included renderings of fake missile strikes on Tel Aviv, along with circulating videos of Iranians chanting in support of Prime Minister Netanyahu. Both were flagged as fabricated by FakeReporter, an Israeli fact-checking organization. However, the organization itself acknowledged that even its detection methods can be wrong at times. This points to a larger problem: the public has no reliable way to separate AI fabrication from reality when monitoring the conflict. Though provenance tools such as SynthID and C2PA can help authenticate footage when it is first distributed, none have been integrated into the major social media platforms where viewers actually receive war-related content. Without public access to these tools, the average person is left relying on inconsistent AI detectors or raw instinct, which is often shaped by whoever controls the narrative first.
On March 19, the Israeli prime minister held an in-person press conference in Jerusalem, declaring, “I am alive, and you are all witnesses.” Compared with the earlier videos, this has been the most verifiable evidence in weeks that Netanyahu is alive. Still, users are not sure what to believe anymore; even with this press conference, many pointed out that the press viewed Netanyahu from a separate room, and some claimed to find yet another potential AI glitch in his jacket sleeve. Whether this appearance closes the question or not, the sequence of events has exposed a glaring vulnerability in truth-seeking. Once AI tools entered the information war, video footage shifted from a generally accepted form of evidence to a gray area in which authenticity must be proven. The question for future conflicts is not whether AI will be used to manipulate wartime narratives; it is whether governments and media platforms can build verification infrastructure capable of keeping pace with AI’s growing capabilities. Based on the events of the past three weeks, the answer remains far from settled.




