TikTok becomes first platform to require watermarking of AI content
TikTok intends to begin labelling AI-generated images and videos uploaded to its video-sharing service. "TikTok is starting to automatically label AI-generated content (AIGC) when it's uploaded from certain other platforms. To do this, we're partnering with the Coalition for Content Provenance and Authenticity (C2PA) and becoming the first video sharing platform to implement their Content Credentials technology," revealed the company. The Chinese-owned short-form video platform said it plans to extend the feature to audio-only content "soon." TikTok already labels AI-generated content made in-app and requires creators to label realistic AI content as such, though how effective that requirement is remains debatable. Content Credentials was created by the C2PA, which was co-founded by Adobe, Arm, BBC, Intel, Microsoft, and Truepic. Its goal is an open, royalty-free technical standard to fight disinformation. The technology works like a watermark by attaching metadata to content, which TikTok can use to instantly recognize and label uploads as AIGC. "They tell you who made it, when it was made, edits that were made and whether AI was used or not," explained Adobe chief trust officer Dana Rao in a TV interview. He compared it to a nutrition label for content.
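Conceptually, the check a platform performs is simple once a signed Content Credentials manifest has been extracted and its signature verified. The Python sketch below is a minimal, simplified illustration of that idea, not TikTok's actual pipeline or the full C2PA spec: it assumes a C2PA SDK has already parsed the manifest into a dict, and the "example-image-generator" name is hypothetical. The digitalSourceType URI for "trained algorithmic media" is the IPTC value the C2PA spec uses to mark generative-AI output.

```python
# Minimal sketch: deciding whether verified C2PA metadata marks content as
# AI-generated. Assumes the manifest has already been extracted and its
# signature validated by a C2PA SDK; the dict layout below is a simplified
# rendition of a real manifest, not the full spec.

# IPTC digital source type the C2PA spec uses for generative-AI output.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def is_ai_generated(manifest: dict) -> bool:
    """Return True if any c2pa.actions assertion declares AI generation."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == TRAINED_ALGORITHMIC_MEDIA:
                return True
    return False

# Example manifest fragment, roughly what a generative tool might attach
# ("example-image-generator" is a hypothetical claim generator name):
manifest = {
    "claim_generator": "example-image-generator/1.0",
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        "digitalSourceType": TRAINED_ALGORITHMIC_MEDIA,
                    }
                ]
            },
        }
    ],
}

print(is_ai_generated(manifest))  # True -> platform would apply an AIGC label
```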

The age of deepfakes has arrived

There is growing concern globally about the human ability to decipher deepfakes – whether it's remote IT worker job applicants, scammers out for money, or pornography. Just this week, the internet was entertained by a stream of AI-generated images of celebrities at the Met Gala who were not in attendance. The fakes were so realistic that even the mom of pop star Katy Perry was fooled. "The AI generated fake photos from the Met Gala are a low-stakes prelude for what's going to happen between now and the elections," observed one US-based commentator. Microsoft Threat Analysis Center manager Clint Watts warned last month that deepfake election subversion is disturbingly easy. Microsoft should know: its VASA-1 tool is considered too dangerous to be released due to ethical considerations. That scenario is playing out in India right now, where AI deepfakes of Bollywood stars endorse political parties and level criticism against the backdrop of an election that will determine the fate of current prime minister Narendra Modi. OpenAI, meanwhile, released model safety guidance earlier this week while acknowledging that it's looking into how to support the creation of NSFW ("not safe for work") content.

Government intervention

Concerns about the images and videos used to replicate both Bollywood actors and lawmakers prompted India's Ministry of Electronics and IT (MeitY) to issue an advisory last fall stating that social media companies must remove deepfakes from their platforms within 36 hours of being reported. Failure to act would leave an organization liable for third-party information hosted on its platform.

Meanwhile, US entrepreneur Cassey Ho recently found herself in the middle of a TikTok deepfake nightmare after one of her clothing designs went viral. She found images of her body superimposed with a different face in videos on TikTok, posted by counterfeiters of her skirt design who needed promotional content. She described it as feeling like she was "in an episode of Black Mirror," and urged her followers to report the incident. "Your use of the report button is just as strong as mine. Any power we may be able to have is going to be our strength in numbers," implored Ho. "Honestly, it's time for the Department of Commerce to really crack down on counterfeits," said one fed-up follower.

The US Department of Commerce requested [PDF] an additional $62.1 million in fiscal year 2025 "to safeguard, regulate, and promote AI, including protecting the American public against its societal risks." In her testimony defending the budget before the House Appropriations Committee, United States Secretary of Commerce Gina Raimondo said those funds would go toward the AI Safety Institute. "Everybody including myself is worried about synthetic content so we want companies to watermark what's AI generated. Well, what's adequate watermarking? What's adequate red teaming? We're going to build a team - the AI Safety Institute - to develop standards so that Americans can be safe," she explained. "We're also investing in scientists and we're investing in policy people at [the National Telecommunications and Information Administration (NTIA)] to help us develop policies for AI," she added.

Watermarks not foolproof

Unfortunately, watermarking may not be the savior it has been billed as. A team at the University of Maryland in the US looked into the reliability of watermarking techniques for digital images and found they were not that robust. The researchers developed an attack to break watermarks and succeeded in toppling every existing scheme they encountered. "Similar to some other problems in computer vision (eg, adversarial robustness), we believe image watermarking will be a race between defenses and attacks in the future," said the boffins. ®
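To make the fragility concrete, here is a toy sketch of the noise-then-denoise ("purification") approach such attacks take: perturb the image enough to drown out the embedded watermark signal, then clean it up so it still looks natural. Attacks like the one the researchers describe use a diffusion model as the denoiser; the median filter and file names below are illustrative stand-ins, not the team's actual method.

```python
# Toy sketch of a "purification" attack on an invisible image watermark:
# add noise to drown out the watermark signal, then denoise to recover a
# clean-looking image. Real attacks use a diffusion model as the denoiser;
# a median filter stands in here purely for illustration.
# Requires: pip install pillow numpy

import numpy as np
from PIL import Image, ImageFilter

def purify(image: Image.Image, noise_std: float = 25.0) -> Image.Image:
    """Perturb then denoise an image, degrading any fragile watermark."""
    pixels = np.asarray(image, dtype=np.float32)
    noisy = pixels + np.random.normal(0.0, noise_std, pixels.shape)
    noisy = np.clip(noisy, 0, 255).astype(np.uint8)
    # Denoising step: a real attack would run a diffusion model here.
    return Image.fromarray(noisy).filter(ImageFilter.MedianFilter(size=3))

# Usage (hypothetical file names):
# watermarked = Image.open("watermarked.png").convert("RGB")
# purify(watermarked).save("attacked.png")
```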
Source: The Register
