One of the most alarming developments in applied artificial intelligence is the growing sophistication of deepfake technology. If you’re a PR manager, you should be very concerned.
Deepfake software helps small businesses and solo creators improve their content creation efforts. For instance, voice AI can be used to scale the creation of video tutorials. But there’s a tremendous downside to worry about, too: the potential for abuse is just as significant.
For instance, one BBC report details how its own staff were impersonated in deepfake campaigns. You may also recall the media storm over deepfakes, and the media literacy gap it exposed, when Elon Musk shared an AI-generated video impersonating Kamala Harris.
If this can happen to such high-profile public figures, the alarm bells ought to be ringing for PR managers everywhere. Fortunately, there are several steps you can take to protect your brand, including legal action and other practical tactics. Let’s dive in!
Understanding the Risks of Deepfakes in PR
Deepfakes can be used to wreak havoc in numerous ways, both directly and indirectly. One of the most significant of these is the potential damage to brand reputation.
“Deepfakes present a serious reputational risk to any business,” says Gary Hemming, Owner & Finance Director at ABC Finance. “A fabricated video showing a leader announcing a controversial policy or making an offensive remark could easily spark protests or negative publicity. Even worse, a deepfake suggesting a company is in financial trouble might cause stock prices to plummet before management is even aware of the situation.”
As a PR manager, you’ll have to develop systems that support early detection and rapid responses to these threats. If you don’t, you may run the risk of costly and lengthy lawsuits or increased regulatory scrutiny.
The global nature of today’s media landscape means PR teams must be culturally fluent and ready for cross-border communication. Professionals who engage in international planning, such as coordinating destination weddings or global events, often gain a deeper understanding of how messaging is received differently across cultures. This can be a hidden strength when managing the ripple effects of deepfake content in various regions.
Reputational damage can often be contained with proactive communication; the damage caused when deepfakes are used for outright fraud, however, is usually much harder to undo.
For example, a 2024 CNN report describes how scammers used deepfaked visuals on a video-conferencing call to pose as high-ranking staff at a company. They convinced a finance worker that he was on a call with colleagues and tricked him into making a fraudulent $25 million payment based on the instructions he was given on that call.
The technology used was so effective that the victim genuinely believed that the people on the call were colleagues whom he knew.
This type of deception isn’t limited to high-stakes finance. Fraudsters are using similar manipulation tactics in other industries as well. For instance, rental scams have become increasingly common, where fake listings and impersonated landlords use digitally altered images or forged identities to trick potential renters into sending deposits. This highlights that deepfakes and synthetic media are not just a PR problem, but represent a broader digital threat impacting consumer safety and trust across all sectors.
Tackling the Risk of Deepfakes to PR
Now that you know the main threats to look out for with deepfakes, the next step is building effective layers of defence. Here’s how to approach this effectively:
1. Create Digital Detection and Response Systems
Setting up and enforcing effective layers of defence against these threats should begin with early detection and response mechanisms that let you see crises coming.
“Effective crisis management today demands more than just technology,” says Ian Gardner, Director of Sales and Business Development at Sigma Tax Pro. “While AI-driven tools are invaluable for catching manipulated audio or synthetic media early, they work best when integrated into a broader strategy. This means training your team to understand the psychological impact of deepfakes on stakeholders and ensuring that your communication plan is prepared to respond quickly and transparently. Ultimately, it’s about marrying cutting-edge detection with human insight to stay ahead of these sophisticated threats.”
These tools can examine metadata, pixel inconsistencies, and other signatures of synthetic media to flag suspicious content before it goes viral, and they’re far more effective at this than human reviewers would be.
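To make the metadata side of that concrete, here is a minimal, illustrative sketch in Python using the Pillow library. It is only a rough triage heuristic, not a real detection product: the file name, tag checks, and keywords are assumptions for the example, and missing camera metadata is a weak signal rather than proof of manipulation.

```python
# Illustrative sketch only: a crude metadata triage pass, not a real deepfake detector.
# Assumes Pillow is installed (pip install pillow); the file name below is a placeholder.
from PIL import Image
from PIL.ExifTags import TAGS

def metadata_warnings(path: str) -> list[str]:
    """Return weak signals that an image may be synthetic or re-edited."""
    exif = Image.open(path).getexif()
    tags = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

    warnings = []
    if not tags:
        # AI-generated or heavily re-encoded images often carry no EXIF data at all.
        warnings.append("no EXIF metadata present")
    if "Make" not in tags and "Model" not in tags:
        warnings.append("no camera make/model recorded")
    software = str(tags.get("Software", "")).lower()
    if any(editor in software for editor in ("photoshop", "gimp", "stable diffusion")):
        warnings.append(f"processed by editing/generation software: {software}")
    return warnings

if __name__ == "__main__":
    for warning in metadata_warnings("suspicious_clip_frame.jpg"):
        print("FLAG:", warning)
```

Commercial detection platforms go much further, combining pixel-level forensics with model-based classifiers, but even a simple triage pass like this shows why automated screening scales in a way manual review cannot.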
Just as important as early detection is the use of social listening platforms to monitor chatter around your brand, so emerging false narratives are spotted before they gain traction.
2. Create Legal and Ethical Safeguards
PR managers and lawyers must work together to create truly effective responses to deepfake crisis situations. It’s one thing to put out a statement to quell a media storm, but quite another to cover all your bases, including the potential legal fallout.
Legal guidance is crucial to help ensure that any public statements made during a deepfake incident are defensible, reducing the risk of defamation claims or inadvertently validating false narratives. Additionally, legal teams can help PR staff navigate the maze of digital impersonation laws, defamation statutes, and intellectual property rights as they relate to deepfakes.
Osbornes Law, for example, underscores the importance of legal recourse when deepfakes cause reputational or emotional harm. Victims can seek compensation through defamation or personal injury claims, or even pursue criminal injury compensation claims in cases involving malicious or fraudulent activity.
3. Crisis Communication Planning for Deepfake Scenarios
While deepfakes pose a unique and growing threat, PR teams can take proactive steps to strengthen their crisis communication plans.
It starts with treating synthetic media threats as a distinct crisis category, recognizing that they can be highly deceptive and spread rapidly. By developing pre-approved holding statements, teams can respond quickly with confidence and avoid the risk of spreading misinformation.
Clear internal protocols that specify who is authorized to speak and how to escalate issues ensure consistent, coordinated responses. Regular crisis simulations, including scenarios that involve social media backlash and rapid media inquiries, prepare teams for the unpredictable nature of deepfake incidents.
4. Strengthening Brand Authenticity and Transparency
Finally, the best long-term defence against deepfakes is building a brand whose genuine communications are easy to verify. Maintain consistent, verifiable messaging across all your owned channels, including your website, social media accounts, and newsletters. Use verified social accounts, watermark sensitive content, and lean on transparent leadership messaging to reinforce credibility; a lightweight example of the watermarking step is sketched below.
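As one small, hedged example of the watermarking idea, the sketch below uses Python and the Pillow library to stamp a visible provenance mark on images before they go out. The brand text, file names, and placement are assumptions for illustration; many organisations will instead rely on invisible watermarking or provenance features built into their publishing tools.

```python
# Illustrative sketch: stamp a visible provenance watermark on an image with Pillow.
# File names and the watermark text are placeholders, not a recommended standard.
from PIL import Image, ImageDraw, ImageFont

def add_watermark(src: str, dst: str, text: str = "Official release - ExampleBrand") -> None:
    base = Image.open(src).convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()

    # Measure the text and place it near the bottom-right corner with a small margin.
    left, top, right, bottom = draw.textbbox((0, 0), text, font=font)
    x = base.width - (right - left) - 20
    y = base.height - (bottom - top) - 20
    draw.text((x, y), text, font=font, fill=(255, 255, 255, 160))  # semi-transparent white

    Image.alpha_composite(base, overlay).convert("RGB").save(dst, "JPEG")

if __name__ == "__main__":
    add_watermark("press_photo.png", "press_photo_marked.jpg")
```

A visible mark like this does not stop a determined forger, but it gives journalists and stakeholders a quick way to distinguish official assets from unmarked copies circulating elsewhere.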
Proactively communicate your company’s values and build trust with your audience. When stakeholders know and believe in your brand, they’re more likely to question suspicious content and wait for official confirmation before reacting. Building brand trust and authenticity can help create a powerful shield against the impact of false claims.
For organizations seeking a more holistic approach to digital resilience, investing in internal training or personal development programs, such as yoga teacher training in Bali, can foster the mindfulness and calm needed in high-pressure situations, along with stronger team collaboration and focus during crisis moments.
Final Thoughts
The threat posed by deepfakes is real. PR professionals must be aware that a deepfake crisis could hit at any time, and the damage to a brand’s reputation could be severe, swift, and long-lasting.
That’s why proactive monitoring, legal awareness, and robust crisis communication plans are absolutely essential. With the tips we’ve shared, you’ll be in a great position to anticipate and respond to the threats posed by deepfakes.