You have probably seen photos of soldiers or historical events on Facebook and other social media, shared to honour service or history.
However, experts say, many of these images are fake.
People create or alter them using Artificial Intelligence (AI) to mislead viewers, distort historical understanding, and manipulate emotions.
StopFake, a fact-checking organisation, explained some AI-generated images are designed to evoke strong reactions, garnering likes, shares, and engagement on social media.
Pages first build a large audience with emotional or viral content, then switch topics or post misleading material to reach those followers with new messages.
This allows creators to manipulate emotions, spread misinformation, or even run scams while appearing credible due to their previously popular posts.
Experts warn these images are not just harmless mistakes; they can undermine our understanding of history and our trust in authentic records.
Pages using AI images are leaving viewers vulnerable to manipulation, research published in the Harvard Kennedy School Misinformation Review (USA) showed.
Several examples of images of fallen Australian soldiers were analysed, with historians, family members and military experts confirming the photos were not authentic, according to AAP FactCheck.
Peter Stanley, historian at the University of NSW, described some of the images of soldiers as “all preposterous fakes … pathetic and contemptible,” adding that “fictitious insignia and weapons appear on every image. I could enumerate half a dozen faults with each.”
Stanley said he feared such technology would soon be used to spread and support unjustifiable interpretations of history.
“Historical understanding depends on the validity of the evidence. When that evidence becomes corrupted, falsehoods can flourish.
“This is a very serious matter for all who care about truthful history.”
Rhys Crawley, a UNSW history lecturer, said none of the images posted on social media were “real” pictures of Sgt Blaine Diddams, and that he “looked quite different” from the images shared online.
The uniforms, patches, ribbons, and insignia were “all wrong,” he added.
Private Tom Starcevich’s daughter, Lynette Starcevich, confirmed the images were not of her father, while Michael Madden, an author on Victoria Cross recipients, noted “those aren’t even Australian uniforms or patches”.
Likewise, images shared of Susan Felsche depict her in the wrong uniform.
David Horner from the Australian National University said, “I’m not aware Susan Felsche was ever in the RAAF and therefore the photo of her in an RAAF uniform cannot be true.”
While details in posts about Private Starcevich and Sgt Diddams appear consistent with those from official sources, the post about Major Felsche includes numerous inaccuracies.
It incorrectly states she was born in Sydney when she was born in Brisbane.
Claims she was killed in a midair collision between two planes in the UK in 1987 are also untrue.
In fact, she was killed in Western Sahara in 1993 when a plane crashed not long after take-off.
The post also claims she was the first woman to die on active military service for the ADF; in fact, she was the first Australian servicewoman to die on overseas duty since World War II.
“Should you see instances of blatant photo manipulation and the use of fake images, we’d encourage you to report this to the social media provider and seek to have the images removed,” the RSL cautioned the public.
“We’re used to believing what we see … We’re now in a world where that is no longer the case. I fear that truth itself is under threat,” UNSW AI professor Toby Walsh said.
How to spot AI-generated images
According to a study by the University of Chicago, only 60 per cent of internet users can reliably tell whether an image is AI-generated.
And as AI improves, it is getting harder to tell real photos from AI creations, but there are clues you can look for.
Start by checking for watermarks in the corners of images, as some AI tools such as Midjourney or DALL-E leave visible traces.
If you can access the metadata, it can also reveal the creation tool, date, or other clues the image may be AI-generated.
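For readers comfortable with a little code, the sketch below shows one way to run that metadata check yourself. It is a minimal example in Python using the Pillow imaging library; the file name is a hypothetical placeholder, and an empty result proves nothing on its own, since social media platforms routinely strip metadata on upload.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_metadata(path):
    """Print any EXIF metadata found in an image file.

    A genuine camera photo often carries tags such as Make, Model and
    DateTime; AI generators may omit EXIF entirely or record the tool
    in a Software tag. Absence of metadata is not proof of fakery.
    """
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found (stripped, or never present).")
        return
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)  # map numeric tag IDs to readable names
        print(f"{name}: {value}")

# Hypothetical file name: point this at an image saved from social media.
inspect_metadata("downloaded_photo.jpg")
```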
Look closely at the image itself.
Backgrounds can appear warped or objects misaligned.
Lighting may be inconsistent, shadows strange, or sunlight unrealistic.
Faces and bodies might show subtle errors like uneven eyes, extra fingers, or limbs bent the wrong way.
Small details can give it away too.
Uniforms, patches, text, or equipment might be wrong or nonsensical.
Repeated textures, duplicate faces, or misplaced objects are common signs of manipulation.
Even pixelation, odd colours, or blurring in weird spots can be a tip-off the image isn’t real.
Being aware of these signs can help you pause before sharing and avoid being misled by fake images online.