Big AI won’t stop election deepfakes with watermarks

But when it comes to the variety of content AI can generate and the many models that already exist, things get complicated. There is currently no standard for watermarking, which means each company uses a different method. DALL-E, for example, applies a visible watermark (and a quick Google search turns up plenty of tutorials on how to remove it), while other services may default to metadata or pixel-level watermarks that are invisible to users. Some of these methods can be difficult to undo, but others, like visible watermarks, can become ineffective when an image is simply resized.
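To see why metadata-style marks are both invisible to users and fragile, here is a minimal sketch using Pillow. The "ai_generated" key is purely illustrative, not any vendor's actual labeling scheme, and the behavior shown assumes a common Pillow setup.

```python
# Minimal illustration (not any vendor's real scheme): a provenance flag stored
# as PNG text metadata is invisible in a viewer, but an ordinary edit strips it.
# Requires Pillow (pip install Pillow).
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Create a stand-in "generated" image and tag it with a hypothetical flag.
img = Image.new("RGB", (64, 64), "white")
meta = PngInfo()
meta.add_text("ai_generated", "true")  # illustrative key, not a standard
img.save("marked.png", pnginfo=meta)

# The flag is readable by software that knows to look for it...
print(Image.open("marked.png").info.get("ai_generated"))  # -> "true"

# ...but with common Pillow versions, a routine resize-and-resave
# does not carry the text chunk over to the new file.
Image.open("marked.png").resize((32, 32)).save("copy.png")
print(Image.open("copy.png").info.get("ai_generated"))  # -> None
```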

“There are many ways to corrupt watermarks,” says Gregory.

The White House statement specifically mentions the use of watermarks for AI-generated audio and visual content, but not for text.

There are ways to watermark text generated by tools like OpenAI’s ChatGPT, for example by manipulating how words are distributed so that certain words or sets of words appear more often than they otherwise would. That shift would be detectable by a machine but not necessarily noticeable to a human reader.
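As a rough illustration of the idea (not OpenAI’s actual scheme), the toy sketch below biases word choice toward a “green list” derived from the preceding word. A detector that knows the same derivation can flag text whose green-word rate is far above chance, even though nothing looks unusual to a human reader.

```python
# Toy statistical text watermark: bias generation toward a pseudo-random
# "green" subset of the vocabulary, then detect the bias statistically.
import hashlib
import random

VOCAB = ["the", "a", "quick", "brown", "fox", "jumps", "over", "lazy",
         "dog", "and", "runs", "fast", "through", "green", "field"]

def green_list(prev_word: str, fraction: float = 0.5) -> set:
    """Derive a pseudo-random 'green' subset of the vocabulary from the
    previous word, so generator and detector agree without sharing the text."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate(length: int = 30, bias: float = 0.9) -> list:
    """Stand-in 'model' that picks words at random but prefers green words."""
    rng = random.Random()
    words = ["the"]
    for _ in range(length):
        greens = green_list(words[-1])
        pool = list(greens) if rng.random() < bias else VOCAB
        words.append(rng.choice(pool))
    return words

def green_fraction(words: list) -> float:
    """Detector: share of words in the green list seeded by their predecessor.
    Watermarked text scores well above the ~0.5 chance baseline."""
    hits = sum(1 for prev, w in zip(words, words[1:]) if w in green_list(prev))
    return hits / (len(words) - 1)

if __name__ == "__main__":
    marked = generate()
    unmarked = [random.choice(VOCAB) for _ in range(30)]
    print("watermarked score:", round(green_fraction(marked), 2))   # well above 0.5
    print("unmarked score:  ", round(green_fraction(unmarked), 2))  # around 0.5
```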

This means that watermarks would have to be interpreted by a machine and then reported to a viewer or reader. That gets more complex with mixed-media content, where audio, image, video and text elements can all appear in a single TikTok video. Someone could, for example, put real audio over an image or video that has been manipulated. In that case, platforms would have to figure out how to indicate that one component of the clip, but not all of it, was AI-generated.

And simply labeling content as AI-generated doesn’t do much to help users determine whether something is malicious, misleading, or intended for entertainment.

“Obviously, manipulated media is not inherently bad if you’re making TikTok videos and they’re meant to be fun and entertaining,” says Hany Farid, a professor at the UC Berkeley School of Information who has worked with the software company Adobe on its Content Authenticity Initiative. “It’s the context that’s really going to matter here. This will continue to be extremely difficult, but platforms have been struggling with these issues for 20 years.”

And the growing place of artificial intelligence in the public consciousness has opened the door to another form of media manipulation. Just as users may assume that AI-generated content is real, the very existence of synthetic content can sow doubt about the authenticity of any video, image or piece of text, allowing bad actors to claim that even authentic content is fake, the so-called “liar’s dividend.” Gregory says the majority of recent cases Witness has seen are not deepfakes used to spread lies, but people trying to pass off real media as AI-generated content.
