In February, this fake video of Brad Pitt and Tom Cruise sent Hollywood into a frenzy.
The now widespread phenomenon of deepfakes poses problems for Hollywood stars. YouTube now wants to introduce a detection and reporting system to combat fraud committed with artificial intelligence.
May 9, 2026, 6:54 p.m.
Anuj Chopra, New York / AFP
YouTube wants to further advance the fight against identity theft through artificial intelligence. Last month, Google’s video platform launched a privacy protection tool that identifies content in which faces have been altered or generated using AI technologies. The project initially targeted government officials, journalists and other political actors.
Actresses and musicians can now also access this service through their agencies and managers. The tool allows users to "search for AI-generated content that imitates their own appearance, such as deepfakes of their face, and to request its removal." Celebrities and artists do not need to have their own YouTube account.
Translation
This article was written by our colleagues in French-speaking Switzerland and translated for you.
Alon Yamin, CEO and co-founder of Copyleaks, an AI content detection platform, believes: “The fact that YouTube is opening its deepfake detection capabilities to public figures marks a turning point in how platforms approach identity protection in the age of generative AI.” He adds:
"The technology that allows us to mimic a person's face, voice and behavior has evolved faster than the corresponding security measures. This has created a gap that malicious actors are already exploiting."
The initiative comes at a time when hyper-realistic videos of deceased celebrities, created with AI software such as Sora, a tool from OpenAI, are increasingly appearing. Sora sparked a flood of videos of Michael Jackson and Elvis Presley, prompting OpenAI to restrict the tool's operation last month.
Last February, Irish director Ruairí Robinson created a stunningly realistic video showing Brad Pitt and Tom Cruise fighting on a rooftop; the prompt behind it was just two sentences long.
The widely shared video, which caused widespread concern in Hollywood, was created using Seedance 2.0, a tool from Chinese company ByteDance. Robinson also produced other videos. In one, Brad Pitt faces a sword-wielding zombie ninja; in another, he fights a robot alongside the ever-present Tom Cruise.
This was a 2 line promptly in seedance 2. If the hollywood is cooked guys are right maybe the hollywood is cooked guys are cooked too idk. pic.twitter.com/dNTyLUIwAV
— Ruairi Robinson (@RuairiRobinson) February 11, 2026
Charles Rivkin, chairman of the Motion Picture Association, the trade body of the major American production companies, called on ByteDance to "immediately" stop its "counterfeiting activities". He also accused the company of violating copyright law.
For its part, YouTube explains that it is working with major artist agencies to improve the detection of problematic images and better protect artists.
The platform does "what is necessary by giving artists these tools for free so that they can protect their legacy," says Jason Newman of the management and production company Untitled Entertainment. In an interview with The Hollywood Reporter, he adds:
“Their legacy is their face, their body, who they are, what they do, their way of expressing themselves.”
The tool was developed after complaints from prominent US figures who criticized YouTube’s cumbersome process for reporting and removing deepfakes. Alon Yamin explains:
“The stakes are particularly high because deepfakes can be used to spread misinformation, manipulate markets, damage reputations or feign support for a particular cause. Reliable detection is therefore no longer optional.” He further emphasizes:
“Detection systems must be extremely precise, kept up to date and accompanied by clear rules and rapid removal procedures in order to be effective.”
The head of Copyleaks argues: “This will not completely eliminate deepfakes, but it can significantly limit their reach and impact by making it more difficult for manipulated content to spread without being detected or objected to.”
(afp)