AI deepfakes in 2025: Global legal actions taken this year
By Anna Zoey | September 16, 2025

In 2024, deepfake fraud reached alarming levels, with half of all businesses reporting cases involving AI-altered audio or video. Against this backdrop, the EU Artificial Intelligence Act (AI Act) officially came into force last year. By mid-2025, it had already banned the most harmful uses of AI-based identity manipulation and introduced strict transparency requirements for AI-generated content. 

The EU’s landmark regulation is not an isolated effort. Around the world, governments are enacting new laws to curb the misuse of AI in identity fraud and deepfake production. From the United States to China, 2025 is shaping up to be a defining year for AI and deepfake regulation.  

In this article, we look at the most prominent AI deepfake regulations of 2025.

Denmark’s Deepfake Law 

One of the most notable developments in the fight against AI deepfakes comes from Denmark. In mid-2025, the government introduced an amendment to its copyright law that establishes every person’s right to their own body, facial features, and voice. In practice, this treats an individual’s likeness as a form of intellectual property—a groundbreaking approach in Europe. With strong cross-party backing, the proposal is currently under public consultation and is expected to be enacted by the end of 2025. 

Under the amendment, creating or sharing any AI-generated, realistic imitation of a person—whether their face, voice, or body—without consent would be illegal. Victims would gain the explicit right to demand takedowns, while platforms that fail to comply could face “severe fines,” according to Denmark’s culture minister Jakob Engel-Schmidt. 

Denmark has also announced plans to leverage its EU Council presidency in late 2025 to advocate for similar protections across the bloc. If successful, this national initiative could serve as a blueprint for Europe-wide deepfake regulation, shaping the continent’s legal response to AI-driven identity manipulation for years to come. 

The United States’ TAKE IT DOWN Act 

For years, the U.S. response to deepfakes relied mainly on state-level laws and civil lawsuits. That changed in May 2025, when the TAKE IT DOWN Act was signed into law—the first federal legislation directly targeting harmful deepfakes. 

The law zeroes in on non-consensual intimate imagery and impersonations, including deepfake pornography, sexual images, or any AI-generated media that falsely depicts a real person in a damaging way. It also criminalizes the knowing distribution of nude or sexual content without consent, whether genuine or AI-generated. Offenders face fines and prison terms of up to three years, with the maximum penalty reserved for aggravated cases such as repeat offenses or distribution with intent to harass. 

Crucially, the Act also imposes new duties on platforms. If an individual reports explicit deepfake content of themselves, the platform must remove it within 48 hours. By May 2026, all platforms hosting user content must implement a clear notice-and-takedown system for intimate imagery. 

The TAKE IT DOWN Act is only the start. As of August 2025, several other federal bills are moving through Congress to reinforce it: 

  • DEFIANCE Act (Disrupt Explicit Forged Images And Non-Consensual Edits) – Reintroduced in May 2025 after an earlier version expired, this bill would give victims of non-consensual sexual deepfakes a federal civil cause of action, with statutory damages up to $250,000. 
  • Protect Elections from Deceptive AI Act – Introduced in March 2025, it would ban the distribution of materially deceptive AI-generated audio or video about federal election candidates. 
  • NO FAKES Act – Introduced in April 2025, this proposal would make it unlawful to create or distribute an AI-generated replica of someone’s voice or likeness without consent, with narrow exceptions for satire, commentary, or reporting. 

Together, these measures suggest that U.S. lawmakers are moving toward a layered federal framework—one that protects individuals from exploitation, safeguards democratic processes, and regulates commercial use of AI-generated likenesses. 

China’s AI Content Labeling Regulations 

In March 2025, Chinese authorities introduced the Measures for Labeling of AI-Generated Synthetic Content, a regulation that took effect on 1 September 2025. Building on earlier rules from 2022–2023, the new framework establishes a traceability system for all AI-generated media. 

The regulation applies to every form of synthetic content—images, video, audio, text, and even VR environments—and requires it to be clearly labeled in two ways: 

  • Visible labels: such as watermarks on images or captions in videos marking them as AI-generated. 
  • Invisible labels: such as encrypted digital signatures embedded in metadata, detectable by algorithms even if the visible marker is removed. 

For example, if a user creates a celebrity face-swap video in a Chinese app, the output must carry both a visible notice and an encrypted watermark. 
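
To make the dual-labeling requirement concrete for developers, here is a minimal, illustrative sketch in Python using the Pillow imaging library. It applies both a visible caption and a machine-readable metadata tag to an image. The metadata key names (aigc_label, generator) are assumptions made purely for illustration; the regulation prescribes outcomes, not specific tooling, and a production system would embed a robust cryptographic watermark rather than plain-text metadata.

from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

# Stand-in for an AI-generated frame; a real app would use its model's output.
img = Image.new("RGB", (640, 360), (40, 40, 40))

# Visible label: a caption rendered onto the image itself.
draw = ImageDraw.Draw(img)
draw.text((10, img.height - 24), "AI-generated content", fill=(255, 255, 255))

# Invisible label: machine-readable metadata embedded in the file.
# These key names are illustrative assumptions, not mandated identifiers.
meta = PngInfo()
meta.add_text("aigc_label", "true")
meta.add_text("generator", "example-face-swap-app")
img.save("labeled_output.png", pnginfo=meta)

# A platform-side check can read the tag back even though viewers never see it.
reloaded = Image.open("labeled_output.png")
print(reloaded.text.get("aigc_label"))  # -> "true"

Plain metadata like this would not survive re-encoding or cropping, which is exactly why the regulation points to embedded digital signatures that remain detectable by algorithms even after the visible marker is removed.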

Platform Obligations 

Content platforms are also required to proactively detect watermarks. If a file lacks them, the platform must prompt the uploader to declare whether it is AI-generated. In addition, the law bans watermark removal tools outright—making any attempt to strip or tamper with AI identifiers illegal. 

Fallback Mechanism 

If a piece of content cannot be verified but is strongly suspected to be AI-made, platforms must label it for viewers as “suspected synthetic.” 

With this two-layered approach, China has set one of the strictest global standards for AI transparency—ensuring that both creators and platforms bear responsibility for marking synthetic media. 

France’s AI Content Labeling Regulations (In Progress) 

In late 2024, the French National Assembly introduced Bill No. 675, which would require clear labeling of any AI-generated or AI-altered images posted on social networks. By early 2025, the proposal had gained traction: 

  • Individuals who fail to label manipulated photos or videos could face fines of up to €3,750. 
  • Platforms that neglect their detection or flagging obligations could face fines of up to €50,000 per offense. 

As of mid-2025, the bill has not yet been adopted, with government discussions still underway. 

While labeling rules remain pending, France has already taken steps against harmful deepfakes. In 2024, lawmakers passed Article 226-8-1 of the Penal Code, which criminalizes non-consensual sexual deepfakes. The law prohibits making public, by any means, sexually explicit content generated by algorithms reproducing a person’s likeness without consent. Penalties include up to two years in prison and fines of €60,000, with harsher sentences in aggravated cases. 

Together, these measures illustrate France’s dual-track approach: a pending push for broader AI content transparency, alongside already-enforceable protections against the most harmful forms of synthetic media. 

Developments in the United Kingdom’s Online Safety Act (In Progress) 

In the UK, 2025 has been the year of implementing the Online Safety Act 2023, a landmark law designed to curb harmful online content. While the Act was passed in late 2023, many of its most significant provisions only came into effect during 2024 and 2025, with additional refinements now under discussion. 

The Act already prohibits the sharing—or even the threat of sharing—intimate deepfake images without consent. However, it originally stopped short of criminalizing the creation of such material. 

That gap is now being addressed. Proposed 2025 amendments would directly target creators of non-consensual sexually explicit deepfakes. Intentionally generating such content—whether to cause alarm, distress, or humiliation, or for sexual gratification—would carry penalties of up to two years’ imprisonment. 

These developments underscore the UK’s effort to expand the scope of its online safety regime: moving from punishing distribution alone to holding creators of harmful synthetic media legally accountable as well. 

Fighting Deepfakes with Regula’s Solutions 

The wave of new deepfake laws in 2025 makes one thing clear: governments expect stronger safeguards against AI-driven identity fraud. On the technology side, this demand is accelerating the adoption of advanced verification solutions with liveness detection, which remain highly effective at exposing deepfakes. 

One such solution is Regula Face SDK, a cross-platform biometric verification toolkit designed to secure digital interactions against identity manipulation. It provides: 

  • Advanced facial recognition with liveness detection – Real-time verification that distinguishes live users from spoofing attempts using photos, videos, or masks. 
  • Face attribute evaluation – Analysis of age, expressions, and accessories to strengthen accuracy and security. 
  • Signal source control – Protection against tampered input streams, reducing the risk of deepfake injection attacks. 
  • Robust adaptability – Reliable performance in a wide range of ambient lighting conditions. 
  • 1:1 face matching – Direct comparison of a user’s live image against their ID document or database entry for precise identity verification. 
  • 1:N face recognition – Scanning and identifying individuals against large databases, enabling quick identification across multiple records. 
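
To give a sense of how the 1:1 and 1:N modes above differ in practice, here is a generic sketch; it is deliberately not Regula’s actual API. It operates on face embeddings, using random vectors as stand-ins for the output of a real face-recognition model, and the 0.8 cosine-similarity threshold is an assumption rather than a calibrated value.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings; a real system derives these from a face model.
rng = np.random.default_rng(0)
document_photo = rng.normal(size=512)  # embedding from the ID document
live_capture = document_photo + rng.normal(scale=0.1, size=512)  # live selfie

# 1:1 matching: compare the live capture against a single reference.
THRESHOLD = 0.8  # assumption; real thresholds are calibrated per model
print("1:1 match:", cosine_similarity(live_capture, document_photo) >= THRESHOLD)

# 1:N identification: rank the live capture against a database of N embeddings.
database = rng.normal(size=(1000, 512))
scores = database @ live_capture / (
    np.linalg.norm(database, axis=1) * np.linalg.norm(live_capture)
)
best = int(np.argmax(scores))
print(f"1:N best candidate: index {best}, score {scores[best]:.2f}")

The design difference is scale: 1:1 answers “is this the same person as the reference?”, while 1:N ranks the probe against every enrolled identity, which is why production systems typically pair it with indexing structures built for fast nearest-neighbor search.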

By combining these capabilities, Face SDK empowers organizations to stay ahead of fraudsters while keeping verification smooth and user-friendly. 

 

Anna Zoey

Anna has been in the content game for over a decade, tackling B2B and B2C like a pro. She knows what works, what clicks, and how to make content that actually matters.
