Disclaimer: This article is general information, not legal advice. Rules and remedies depend on facts, country, and platform.
A deepfake usually arrives the same way: someone sends it “as a joke.” Your face, your voice, your mannerisms — stitched into a scene you never lived. For a moment it is just content. Then it spreads. Screenshots start circulating. Someone tags your employer. Suddenly you are not dealing with “AI.” You are dealing with consequences.
The legal question is not philosophical. It is practical: who can be held responsible when a manipulated video causes real harm?
Deepfakes are not one legal category
Courts and regulators rarely care about the label “deepfake.” They care about the effect.
So the first step is to identify what the content is doing:
- Does it make you identifiable (face, voice, name, account tags, context)?
- Does it mislead viewers into thinking you said or did something?
- Does it create reputational damage, harassment, or safety risks?
- Is it sexualised or designed to shame you?
- Is it being used commercially (ads, promotions, monetised pages)?
Same tool. Different legal routes.
Who can be “on the hook”
Responsibility tends to cluster around control: who made it, who posted it, who kept it up.
1) The creator (or whoever commissioned it)
This is the cleanest liability target. They made the thing. Even if they hide behind anonymous accounts, anonymity is a procedural obstacle, not a defence. If you can identify the person, the case usually becomes simpler.
2) The uploader and the amplifiers
Sometimes the creator isn’t the person pushing it. Someone else uploads it, captions it, and invites a particular interpretation. That matters. Presentation is often the harm.
There is a difference between “sharing a weird edit” and publishing it as if it is true. Captions like “look what she did” or “expose him” move the post from silly to legally relevant very quickly.
3) The platform
Platforms sit in the middle. They often present themselves as neutral pipes. They are not. They rank, recommend, monetise, and scale.
In Europe, platforms generally have more room to argue limited liability than individual posters do. But that room shrinks once they are clearly notified and the content is obviously unlawful or plainly abusive. Notice changes the posture. It turns “we didn’t know” into “we chose to leave it up.”
4) The tool provider
This is harder and usually not the first move. Most disputes focus on creator/uploader/platform because intent and control are easier to show there.
Tool-provider liability tends to require something extra: design that predictably enables abuse, weak safeguards sold as a feature, or a specific link between product choices and foreseeable harm. It can happen. It is not the fastest path in most real cases.
Legal routes in Europe that actually bite
If you are in the Netherlands (or broadly in Europe), the strongest claims tend to come from privacy/data protection, personality rights, and civil liability logic (defamation/unlawful publication). The labels differ by country. The structure is similar.
Data protection: your face and voice are personal data
Creating or sharing a deepfake that uses your likeness usually involves processing personal data. “Your photos are online” does not mean anyone can reuse them for any purpose. The question becomes: what is the legal basis for that processing? In real life, deepfake creators almost never have consent.
Some cases go further. Where the manipulation relies on facial or vocal characteristics in a way that functions like identification, it can edge into biometric territory. That raises the compliance risk and tends to increase platform responsiveness.
Defamation and unlawful publication: fake “evidence” is still publication
Deepfakes are reputationally dangerous because they mimic proof. Even if the uploader adds “it’s fake,” courts look at context: realism, captions, comment sections, and the foreseeable way the public will read it.
Put differently: if the format is persuasive and the post invites belief, the harm is not cured by a lazy disclaimer in small print.
Personality rights: you are not a reusable asset
There is also a basic point that gets lost: your image and voice are not free raw materials. Using someone’s likeness for humiliation content, sexual content, targeted harassment, or commercial gain is exactly the kind of situation personality-rights reasoning is meant to address.
“It’s satire” and other predictable defences
Expect the free expression argument. Sometimes it is legitimate. Parody exists. Satire exists.
But legality is not decided by the word “joke.” Courts tend to ask what a reasonable viewer would take from the post and whether the harm is disproportionate to any expressive value.
Realism matters. Target matters. Scale matters. Sexual humiliation matters a lot.
If you’re the target: what to do first
Most people react as if this is a social problem only. It is also an evidence problem. Act like it.
- Preserve proof immediately: screen-record the video playing (with captions and comments visible), save the URL, screenshot the account and any reposts, and note the date and time of each capture (a minimal logging sketch follows this list).
- Report with legal keywords: “non-consensual manipulated media,” “impersonation,” “harassment,” “defamation,” “privacy,” and if relevant, “sexual content.” Platforms route tickets differently depending on the category.
- Send a formal takedown notice: keep it short. State that you are identifiable, you did not consent, and the post is harmful. Attach one screenshot. Ask for removal and for links to be de-indexed where applicable.
- Use data protection leverage: where available, submit an erasure/objection request. Even when platforms argue about controller roles, the paper trail helps and often speeds up moderation.
- Escalate fast if there are threats or sexualised content: treat that as urgent. It moves the case from “internet drama” into safety territory.
- Don’t bargain privately with an attention-seeker: DMs often become more content. Go procedural: document, report, escalate.
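If it helps to keep that record tidy, here is a minimal sketch (Python, standard library only) of one way to log a post URL, a UTC timestamp, and SHA-256 hashes of screenshots or recordings you have already saved. The file names and log path are placeholders, not a required format, and the script is a convenience for your own documentation, not a legal requirement.

```python
"""Minimal evidence log: append a URL, a UTC timestamp, and file hashes
to a local JSON file. Paths and names are illustrative placeholders."""
import hashlib
import json
import sys
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("deepfake_evidence_log.json")  # hypothetical log location

def sha256_of(path: Path) -> str:
    """Hash a saved file so later copies can be shown to be unchanged."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def log_evidence(url: str, files: list[str]) -> None:
    """Append one entry (URL, timestamp, hashed files) to the JSON log."""
    entry = {
        "url": url,
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
        "files": [{"name": name, "sha256": sha256_of(Path(name))} for name in files],
    }
    existing = json.loads(LOG_PATH.read_text()) if LOG_PATH.exists() else []
    existing.append(entry)
    LOG_PATH.write_text(json.dumps(existing, indent=2))

if __name__ == "__main__":
    # usage: python log_evidence.py <post_url> screenshot1.png recording.mp4 ...
    log_evidence(sys.argv[1], sys.argv[2:])
```

Keep the log and the original files together and unedited; the point is simply a dated, consistent record you can hand to a platform, a lawyer, or the police without reconstructing events from memory.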
A blunt conclusion
Deepfakes are not mainly a celebrity problem. They are an ordinary-person problem. Classmates, ex-partners, coworkers, strangers who want clout. The harm is cheap, fast, and sticky.
The legal bottom line is simple:
If it uses your face or voice, it is not “just content.” If it is presented as real, responsibility becomes easier to attribute. If it triggers harassment or sexual humiliation, the system becomes much less patient with “it was a joke.”
Optional: paste-ready takedown text (short)
Subject: Takedown request — non-consensual manipulated media using my likeness
Hello,
I am identifiable in the attached content (face/voice). I did not consent to this manipulated video being created or posted. It impersonates me and is causing harm (harassment/reputational damage). Please remove the content promptly and take steps to prevent re-uploads. The post URL and screenshots are below.
Thank you.
[Your name]
[Link(s)]
[Screenshot(s)]
