In its latest bid to curb unauthorized AI-generated deepfakes, Google is taking new steps to remove reported illicit images from its search results and demote the websites that host them, the technology and search giant said on Wednesday.
A deepfake is media created with generative AI, such as videos, pictures, or audio clips that appear real. Many of these fake images depict celebrities like actress Scarlett Johansson, politicians like U.S. President Joe Biden, and, more insidiously, children.
“For many years, people have been able to request the removal of non-consensual fake explicit imagery from Search under our policies,” Google said in a blog post. “We’ve now developed systems to make the process easier, helping people address this issue at scale.”
Such reports, a Google spokesperson further explained to Decrypt, will affect the visibility of a site in its search results.
“If we receive a high volume of removal requests from a site, under this policy, that’s going to be used as a signal to our ranking systems that that site is not a high-quality site—we’ll incorporate that in our ranking system to demote the site,” the spokesperson said. “Broadly speaking, that’s not the only way that we can go about limiting the visibility of that content in search.”
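The mechanism the spokesperson describes can be sketched in a few lines. This is a hypothetical illustration only, assuming made-up names, thresholds, and penalty values; Google has not disclosed how its ranking systems actually weigh this signal.

```python
# Hypothetical sketch of a removal-request ranking signal, as described
# in the quote above. The threshold, penalty, and function names are
# illustrative assumptions, not Google's actual implementation.
from collections import Counter
from urllib.parse import urlparse

def demotion_factor(removal_request_urls, site, threshold=100, penalty=0.5):
    """Return a multiplier applied to a site's ranking score.

    If a site accumulates more than `threshold` removal requests, its
    score is scaled down by `penalty` (an assumed value); otherwise the
    score is left unchanged.
    """
    counts = Counter(urlparse(u).netloc for u in removal_request_urls)
    return penalty if counts[site] > threshold else 1.0

# Example: a site with many reported pages gets demoted.
requests = ["https://badsite.example/page%d" % i for i in range(150)]
print(demotion_factor(requests, "badsite.example"))   # 0.5 (demoted)
print(demotion_factor(requests, "other.example"))     # 1.0 (unchanged)
```

The key point of the design is that individual removal requests are aggregated into a per-site signal, so a pattern of violations affects the whole domain's visibility rather than just the reported pages.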
With Google’s new update, when the company receives a request to remove non-consensual deepfake imagery from its search results, it will also work to filter similar results from searches that include the name of the person being impersonated.
“What that means is that when you remove a result from search under our policies, in addition, what we’ll do is on any query that includes your name—or would be likely to surface that page from search—all explicit results will be filtered,” the spokesperson said. “So not all explicit results will be removed, but all explicit results will be filtered on those searches, which prevents them from appearing on searches where it would be likely to show up.”
In addition to filtering its search results, Google said it will demote sites that have received a “high volume of removals for fake explicit imagery.”
“These protections have already proven to be successful in addressing other types of non-consensual imagery, and we’ve now built the same capabilities for fake explicit images as well,” Google said. “These efforts are designed to give people added peace of mind, especially if they’re concerned about similar content about them popping up in the future.”
A challenge of the new policy, Google acknowledged, is ensuring that consensual or “real content,” like nude scenes in a film, is not taken down along with the illegal AI deepfakes.
“While differentiating between this content is a technical challenge for search engines, we’re making ongoing improvements to better surface legitimate content and downrank explicit fake content,” Google said. Regarding child sexual abuse material (CSAM), the Google spokesperson said the company takes the subject very seriously and has dedicated an entire team to combating such illegal content.
“We have hashing technologies, where we have the ability technologically to detect CSAM proactively,” the spokesperson said. “That’s something that’s sort of an industry-wide standard, and we’re able to block it from appearing in search.”
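The industry-standard approach the spokesperson refers to is matching image hashes against a database of known flagged material. Below is a minimal, hypothetical sketch of that idea. Real systems use perceptual hashes (such as Microsoft's PhotoDNA) that tolerate resizing and re-encoding; plain SHA-256, used here for simplicity, only catches exact byte-for-byte matches.

```python
# Minimal sketch of hash-based matching against a blocklist of known
# flagged images. All names and data here are illustrative assumptions;
# production systems use perceptual hashing, not exact digests.
import hashlib

def sha256_digest(image_bytes: bytes) -> str:
    """Compute a hex digest identifying the exact image bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

# A hypothetical blocklist of digests of previously identified images.
BLOCKLIST = {sha256_digest(b"known-flagged-image-bytes")}

def is_flagged(image_bytes: bytes) -> bool:
    """Return True if the image's digest appears in the blocklist."""
    return sha256_digest(image_bytes) in BLOCKLIST

print(is_flagged(b"known-flagged-image-bytes"))  # True
print(is_flagged(b"some-other-image"))           # False
```

The advantage of this design is that known material can be detected and blocked proactively, without anyone having to view the image itself, since only digests are compared.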
In April, Google joined Meta, OpenAI, and other generative AI developers in pledging to enforce guardrails that would keep their respective AI models from generating child sexual abuse material (CSAM).
As Google works to remove deepfake content and make it harder to find, deepfake experts like Ben Clayton, CEO of audio forensics firm Media Medic, say the threat will remain as the technology evolves.
“Combating deepfakes is a moving target,” Clayton told Decrypt. “While Google’s update is positive, it requires ongoing vigilance and improvements to its algorithms to prevent the spread of harmful content. Balancing this with the need for free expression is tricky, but it’s essential to protect vulnerable groups.”
Clayton said that while deepfakes impact privacy and security, the technology can also have implications in legal cases.
“Deepfakes could be used to fabricate evidence or mislead investigations, which is a serious concern for our legal clients,” he said. “The potential for deepfakes to interfere with justice is a critical issue, highlighting the importance of advanced detection technologies and ethical standards in media.”
Policymakers have also taken steps to combat deepfakes. In July, Sen. Maria Cantwell, D-Wash., introduced the Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act, which called for a standardized method of watermarking AI-generated content.
“Everyone deserves the right to own and protect their voice and likeness, no matter if you’re Taylor Swift or anyone else,” Sen. Chris Coons, D-Del., said in a statement. “Generative AI can be used as a tool to foster creativity, but that can’t come at the expense of the unauthorized exploitation of anyone’s voice or likeness.”
Entertainment industry leaders have also rallied behind legislative efforts like the No Fakes Act, a Senate bill aimed at protecting individuals from unauthorized digital replicas of their voice and likeness.
“The No Fakes Act is supported by the entire entertainment industry landscape, from studios and major record labels to unions and artist advocacy groups,” SAG-AFTRA said in a statement applauding the measure. “It is a milestone achievement to bring all these groups together for the same urgent goal.”
“Game over, A.I. fraudsters,” SAG-AFTRA President Fran Drescher added. “Enshrining protections against unauthorized digital replicas as a federal intellectual property right will keep us all protected in this brave new world.”
Edited by Ryan Ozawa.