- Meta announced that it will detect and label AI-generated content on Facebook, Instagram, and Threads.
- Meta is working with industry organizations such as the Partnership on AI (PAI) to develop common standards for identifying AI-generated content.
- The company said it will continue to observe and learn, collaborate with the industry, and stay in dialogue with governments and civil society.
Meta announced in a post published this morning that it will detect and label AI-generated content on Instagram and Threads. However, the company warned that "it is not yet possible to detect all AI-generated content."
The announcement comes two weeks after pornographic AI-generated deepfakes of singer Taylor Swift went viral on Twitter, sparking condemnation from fans and lawmakers and making global headlines. Meta is also under pressure to deal with AI-generated images and doctored videos ahead of the 2024 US elections.
Meta’s president of global affairs, Nick Clegg, said that AI-generated content is not yet widespread, and that as it becomes more common, there will be debates across society about what should and should not be done to distinguish synthetic from non-synthetic content. The company said it will continue to observe and learn, collaborate with the industry, and stay in dialogue with governments and civil society.
The post emphasized that Meta is working with industry organizations such as the Partnership on AI (PAI) to develop common standards for identifying AI-generated content. It also noted that the watermarks used for Meta AI images are compatible with PAI’s most common implementations.
Meta said it will label images that users post to Facebook, Instagram, and Threads when it can detect industry-standard indicators that they were produced by AI.
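Meta has not published its exact detection logic, but "industry-standard indicators" generally refers to provenance metadata embedded in the file, such as the IPTC `DigitalSourceType` field (whose `trainedAlgorithmicMedia` value is a real industry marker for AI-generated media). The sketch below is purely illustrative: the `metadata` dict stands in for fields parsed from an actual image, and the helper function is a hypothetical example, not Meta's or any library's actual API.

```python
# Hypothetical sketch: checking parsed image metadata for an AI-provenance
# marker. The dicts below stand in for fields parsed from a real file; the
# IPTC "trainedAlgorithmicMedia" DigitalSourceType is a real industry
# indicator, but this helper is illustrative only.

AI_SOURCE_TYPES = {
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
}

def looks_ai_generated(metadata):
    """Return True if the metadata carries a known AI-provenance marker."""
    return metadata.get("Iptc4xmpExt:DigitalSourceType") in AI_SOURCE_TYPES

generated_image = {
    "Iptc4xmpExt:DigitalSourceType":
        "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
}
camera_photo = {}  # a file with no provenance fields at all

assert looks_ai_generated(generated_image) is True
assert looks_ai_generated(camera_photo) is False
```

As Meta's warning suggests, a check like this only catches content that carries the marker; images stripped of their metadata would pass undetected.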
Clegg wrote that Meta’s approach “represents the cutting edge of what is currently technically possible,” adding: “We are working hard to develop classifiers that can help us automatically detect AI-generated content, even if there are no invisible marks in the content. We are also looking for ways to make it more difficult to remove or replace invisible watermarks.”
In July 2023, seven companies promised President Biden that they would take concrete steps to improve AI security, including watermarking. In August, Google DeepMind released a beta version of a new watermarking tool called SynthID, which embeds a digital watermark directly into the pixels of an image, making it imperceptible to the human eye but detectable for identification.
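SynthID's actual algorithm is proprietary and far more robust than anything this short, but the general idea of an invisible pixel-level watermark can be illustrated with a toy least-significant-bit (LSB) scheme. Everything below (the pixel values, the bit string, both helper functions) is a made-up example, not SynthID or any real watermarking tool:

```python
# Toy illustration of invisible pixel watermarking via least-significant
# bits. NOT SynthID's algorithm; it only shows the general idea of hiding
# a mark in pixel data without visibly changing the image.

def embed_watermark(pixels, bits):
    """Write each watermark bit into the LSB of one 8-bit pixel value."""
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & 0xFE) | bit  # clear LSB, set watermark bit
    return marked

def extract_watermark(pixels, length):
    """Read the watermark back from the first `length` pixel LSBs."""
    return [p & 1 for p in pixels[:length]]

image = [200, 13, 77, 154, 90, 33, 248, 61]  # fake grayscale pixel values
mark = [1, 0, 1, 1]                          # a 4-bit watermark

stamped = embed_watermark(image, mark)
assert extract_watermark(stamped, len(mark)) == mark
# Each pixel changes by at most 1 intensity level, invisible to the eye:
assert all(abs(a - b) <= 1 for a, b in zip(image, stamped))
```

A naive LSB mark like this is also trivially destroyed by re-compression or cropping, which is exactly the fragility the next paragraph describes.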
But so far, digital watermarks, whether visible or invisible, have not been enough to stop bad actors. “We don’t have a reliable watermark at this point, we’ve broken them all,” University of Maryland computer science professor Soheil Feizi told Wired in October. Feizi and his fellow researchers examined how easily malicious actors could evade watermarking attempts.
Clegg noted that this labeling is limited to images for now: AI tools that generate audio and video do not yet include these signals, but the company will let people disclose and label such content themselves when they publish it online.
Compiled by: Esin Özcan