07 January 2020

Facebook bans deceptive deepfakes and some misleadingly modified media


Facebook wants to be the arbiter of truth after all. At least when it comes to intentionally misleading deepfakes and heavily manipulated and/or synthesized media content, such as AI-generated photorealistic human faces that look like real people but aren’t.

In a policy update announced late yesterday, the social network’s VP of global policy management, Monika Bickert, writes that it will take a stricter line on manipulated media content from here on in — removing content that’s been edited or synthesized “in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say”.

However, edits for quality, or cuts and splices to videos that simply curtail or change the order of words, are not covered by the ban.

Which means that disingenuous doctoring — such as this example from the recent UK general election (where campaign staff for one political party edited a video of a politician from a rival party who was being asked a question about Brexit to make it look like he was lost for words when in fact he wasn’t) — will go entirely untouched by the new ‘tougher’ policy. Ergo there’s little here to trouble Internet-savvy political ‘truth’ spinners. The disingenuous digital campaigning can go on.

Instead of grappling with that sort of subtle political fakery, Facebook is focusing on quick PR wins — around the most obviously inauthentic stuff where it won’t risk accusations of partisan bias if it pulls bogus content.

Hence the new policy bans deepfake content that involves the use of AI technologies to “merge, replace or superimpose content onto a video, making it appear to be authentic” — which looks as if it will capture the crudest stuff, such as revenge deepfake porn which superimposes a real person’s face onto an adult performer’s body (albeit nudity is already banned on Facebook’s platform).

It’s not a blanket ban on deepfakes either, though — there are some big carve-outs for “parody or satire”.

So it’s a bit of an open question whether this deepfake video of Mark Zuckerberg, which went viral last summer — seemingly showing the Facebook founder speaking like a megalomaniac — would stay up or not under the new policy. The video’s creators, a pair of artists, described the work as satire so such stuff should survive the ban. (Facebook did also leave it up at the time.)

But, in future, deepfake creators are likely to further push the line to see what they can get away with under the new policy.

The social network’s controversial policy of letting politicians lie in ads also means it could, technically, still give pure political deepfakes a pass — i.e. if a political advertiser was paying it to run purely bogus content as an ad. Though it would be a pretty bold politician to try that.

More likely there’s more mileage for political campaigns and opinion influencers to keep on with more subtle manipulations. Such as the doctored video of House Speaker Nancy Pelosi that went viral on Facebook last year, which had slowed-down audio that made her sound drunk or ill. The Washington Post suggests that video — while clearly potentially misleading — still wouldn’t qualify to be taken down under Facebook’s new ‘tougher’ manipulated media policy.

Bickert’s blog post stipulates that manipulated content which doesn’t meet Facebook’s new standard for removal may still be reviewed by the independent third party fact-checkers Facebook relies upon for the lion’s share of ‘truth sifting’ on its platform — and who may still rate such content as ‘false’ or ‘partly false’. But she emphasizes it will continue to allow this type of bogus content to circulate (while potentially reducing its distribution), claiming such labelled fakes provide helpful context.

So Facebook’s updated position on manipulated media amounts to ‘no to malicious deepfakes, but spin doctors please carry on’.

“If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false,” Bickert writes, claiming: “This approach is critical to our strategy and one we heard specifically from our conversations with experts.

“If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labelling them as false, we’re providing people with important information and context.”

Last month Facebook announced it had unearthed a network of more than 900 fake accounts that had been spreading pro-Trump messaging — some of which had used false profile photos generated by AI.

The dystopian development provides another motivation for the tech giant to ban ‘pure’ AI fakes, given the technology risks supercharging its fake accounts problem. (And, well, that could be bad for business.)

“Our teams continue to proactively hunt for fake accounts and other coordinated inauthentic behavior,” suggests Bickert, arguing that: “Our enforcement strategy against misleading manipulated media also benefits from our efforts to root out the people behind these efforts.”

While still relatively nascent as a technology, deepfakes have shown themselves to be catnip to the media which loves the spectacle they create. As a result, the tech has landed unusually quickly on legislators’ radars as a disinformation risk — California implemented a ban on political deepfakes around elections this fall, for example — so Facebook is likely hoping to score some quick and easy political points by moving in step with legislators even as it applies its own version of a ban.

Bickert’s blog post also fishes for further points, noting Facebook’s involvement in a Deepfake Detection Challenge which was announced last fall — “to produce more research and open source tools to detect deepfakes”.

She also says Facebook has been working with the news agency Reuters to offer free online training courses to help journalists identify manipulated visuals.

“As these partnerships and our own insights evolve, so too will our policies toward manipulated media. In the meantime, we’re committed to investing within Facebook and working with other stakeholders in this area to find solutions with real impact,” she adds.
