Defending against AI fakes
24 February 2026 · 1 min read

Misinformation, fake articles, deceptive videos and voice clones have exploded online. It's critical that organisations are prepared before they're targeted.
Noticed any fake news articles or videos in your social feeds lately? Seen pictures that, at a glance, look believable, but on closer consideration are a bit too 'unreal'?
Picking what's real and what's not online is becoming more challenging. Technology once used only by specialists, filmmakers and newsrooms is now in the hands of almost everyone.
Last week, a false online news article – complete with photos depicting a combative interview between veteran finance journalist Alan Kohler and the CEO of Commonwealth Bank, Matt Comyn – went viral.
However, the interview and alleged confrontation never happened.
Alan Kohler has since described how the article confused people, with many sending it to him to ask whether it was genuine. It led the respected journalist to declare that "social media is dead", in the sense that audiences can no longer believe what they see on social platforms.
Internationally, AI-generated fake content is even more prolific. The conspiracy theory that Jeffrey Epstein (who died in jail in 2019) is alive has been given a fresh push thanks to AI-generated content, spurring on rumours he is hiding in Italy. A widely shared photo purporting to show him walking down a street in Tel Aviv is also a fake. In this case, the nonsensical street signs are a giveaway.
AI platforms that create videos have become so sophisticated, even Hollywood studios may be spooked. A 15-second fake video of Tom Cruise and Brad Pitt fighting on top of a building was widely shared this month; the only way to realise it isn’t real is a small digital watermark.
Managing your reputation in the new world of AI
This week, as a test, the York Park Group team asked Google's AI agent Gemini whether it fields a lot of questions about AI fakes. It responded:
I definitely do. In fact, questions about AI fakes have become one of the most common things I’m asked about in 2026. The volume of deepfakes online has exploded – some experts estimate there are now over 8 million deepfake videos circulating.
Later, in its detailed answer, it added:
… especially with major global elections happening this year, people constantly ask me to verify “leaked” audio of politicians.
Misinformation, financial scams, and embarrassing fake photos and videos are legitimate and concerning issues, and this new frontier can also have massive ramifications for protecting an organisation's reputation.
A fake picture or video can go viral in seconds, so it's critical that organisations are prepared before they're targeted.
1. Consistency in communications
Regular communications, including online content, give your stakeholders a benchmark for who you are, what you stand for, and what your business is really like. That consistency is not only important for your brand and reputation; it is also critical when it comes to discrediting an AI fake.
Authentic content that makes people familiar with you will also make them suspicious of scams that don't fit your usual style. It helps them sense when something doesn't ring true, and to pause and think: 'That doesn't sound/look/feel like something they would do.'
Make sure your customers and stakeholders know what your ‘true voice’ is, through consistent messages across your channels.
2. Ensure your Crisis Communications Plan has Holding Statements
Organisations can’t afford to be slow to respond if a deepfake video of their CEO making scandalous or damaging statements is spreading online. Such false content can cause irreversible damage in minutes.
Holding statements should not simply deny that the false material exists; rather, they should educate the public about deepfakes and voice clones: make it clear that a manipulated or fake video is circulating, and that more information will come once you know more about how or why it was created. Crucially, a holding statement should not repeat the falsehoods themselves, as repetition can give them traction.
These holding statements should be shared through as many 'owned' communication channels as possible, and organisations should also make clear where verified information will be published, so people know where to go for the facts.
3. Create a double check culture
Even an organisation’s own employees can be fooled by a sophisticated video or voice cloned phone call.
In 2024, a British engineering firm fell victim when a worker was tricked into transferring huge sums of money during a hoax video meeting that used fake voices and images of senior company executives.
Organisations can introduce protocols requiring staff to verify any unusual request made in a video meeting with a direct phone call, or to double-check instructions via a secondary, pre-arranged channel.
If staff are unsure about a direction given in a video meeting, they can ask the person to wave their hands across the camera: AI tools still make mistakes with movement and with rendering 'layers'.
Another tip: AI-generated humans in motion sometimes blink mechanically, too perfectly, or not at all.
4. Location verification (and why you should keep receipts)
If a deepfake video shows an executive at a certain location, look for inconsistencies and then point them out in public statements.
For instance, does it show a business leader on a sunny day in Sydney? Their diary and other records might prove they were in fact on their way to a meeting in overcast Melbourne.
In a crisis, you may be able to immediately share details from travel documents with trusted news organisations, to help them fact check and dent the momentum of a viral deepfake moment.
Even a timed and dated lunch receipt, parking ticket, driving log, or timestamped message can offer proof of someone’s whereabouts.
It's somewhat comforting that, even in the world of high-tech AI, a boring printed real-world receipt can come out on top.