(Liberty Shield Network) –
Many U.S. military veterans say they have long been wary of traditional media, after years of watching partisan coverage, pundit spin and half-told stories. But researchers, cybersecurity experts and fact-checkers warn that while attention is focused on cable news and social feeds, a newer and more adaptable threat has moved into the information fight: artificial intelligence.
They say AI is no longer just a tool for convenience or entertainment. It is increasingly being used as a “weapon system” in its own right — supercharging cyberattacks, generating convincing deepfakes, tailoring scams to individuals and helping foreign and domestic actors probe for soft targets in public opinion and critical infrastructure.
Domestic actors are using these tactics, too. One example: After the assassination attempt on former President Donald Trump, fabricated posts appeared online claiming to come from a Secret Service agent named “Jonathan Willis,” alleging that agents were ordered to stand down. Officials later said no such employee exists. A doctored photo also circulated, altering agents’ facial expressions to make it appear they were smiling after the attack. This is the sort of dangerous disinformation that cycles through Americans’ feeds every day.
Bots, deepfakes and a flood of synthetic content
Cybersecurity firm Imperva has estimated that bots now account for roughly half of all internet traffic. Some run customer-service chats or automate routine tasks. Others mimic human behavior to spread false narratives, manipulate opinion or steal personal data.
Veterans’ groups and digital security advocates say military and veteran communities are frequent targets. Accounts that appear to honor service or share “insider” intel can, on closer inspection, show classic signs of automation or deception: generic usernames, stock or AI-generated profile photos, awkward language, nonstop posting at odd hours and a lack of any local details tied to the place the account claims to be from. Recent pieces of synthetic content that experts have flagged include:
- An AI-generated “leak” from inside Area 51, tapping into decades of UFO lore and conspiracy culture.
- A supposed breaking-news video claiming Los Angeles fires were started by flame-throwing drones, built from a stock “breaking news” graphic, an altered voiceover and repurposed Ukrainian drone footage.
In each case, experts say, the content spread because it was dramatic, surprising or emotionally charged — not because it had been verified.
Extremists and state actors turn to AI
Terrorist organizations and foreign governments are also experimenting with AI-driven tactics. In one recent example, after an attack on a Russian concert venue, the Islamic State circulated a 92-second video that appeared to feature a human news anchor in uniform downplaying the violence. The “anchor,” later determined to be AI-generated, spoke in a polished, authoritative style meant to reassure sympathizers and mislead observers.
Separately, a Microsoft analysis found that China and Russia have deployed AI to generate deceptive images, videos and persona accounts ahead of U.S. elections, with the goal of amplifying social fractures and undermining trust in democratic institutions. Veterans are among their priority targets, researchers say, because they are seen as trusted voices in their communities and deeply attuned to national security issues.
Earlier this year, OpenAI reported that hostile state-aligned actors, including Russia, China and Iran, had run influence operations that used AI tools to operate bots, generate talking points and build fake media sites closely mimicking legitimate outlets, all aimed at shaping public opinion without firing a shot.
Voice-cloning tools add another layer. With just a few seconds of audio, AI systems can now generate speech that fools even trained ears. Recent incidents have involved AI-generated voices impersonating senior officials and public figures in calls, robocalls and online videos. Authorities and experts warn that cloned voices can be used to solicit money, passwords or sensitive information — or simply to inflame political tensions.
Veterans’ trust and leadership imagery as targets
For veterans, analysts say, the risk is not just generic misinformation but the weaponization of military symbols and leadership. One recent example involved a video that appeared to show retired Lt. Gen. Steven Blum criticizing political leaders and demanding the release of sensitive documents. The clip was built from a real 2007 photo, with AI tools used to animate his face and generate speech.
Small visual glitches — shifting insignia, moving patches, off-sync lip movements — revealed the clip as synthetic, but only to those who looked closely. Experts say such content aims to exploit the deep trust service members place in the chain of command and the uniform itself.
“Leadership voice” is also being mimicked. AI tools such as Grok, among many others, can generate realistic headlines, quotes and even avatar videos in seconds. In demonstrations, users have created images of public figures announcing fictional campaigns or staged alien invasions of major U.S. cities, scenarios that are easy to recognize as satire. The same tools, applied to real-world crises, can create content that is far harder to distinguish from reality.
“Verify before you amplify”
Media-literacy advocates and veteran organizations say the goal is not to scare veterans away from technology, but to encourage them to apply familiar discipline to a new battlespace.
They recommend:
- Interrogating the source. Before sharing, ask who is behind the message, who benefits from it and whether multiple credible outlets are reporting the same thing.
- Watching and listening for glitches. In video and audio, look for odd pacing, flat emotional tone, repeated phrases, shifting details or visual artifacts that suggest digital alteration.
- Checking with independent tools. Reverse-image searches such as Google Lens can reveal where a picture first appeared and whether it has been recaptioned. Dedicated fact-checking projects and media-literacy organizations offer searchable databases of debunked rumors and explainers on how manipulated content is created.
- Treating emotion as a warning flare. Content that immediately triggers anger, fear or vindication is often designed to do exactly that. Experts suggest slowing down when a post feels “too perfect” in confirming existing beliefs.
Advocates say veterans are uniquely equipped to adapt. They are used to reading terrain, spotting patterns and verifying intelligence before acting on it. The same instincts can help protect their communities online.
The message, they say, is simple: the battlefield has shifted to screens, but the mission is familiar — stay aware, check the intel and verify before amplifying any claim, no matter how real it looks or sounds.


