Social media companies consider policies targeting ‘deepfakes’

Facebook, Google and Twitter are all considering writing policies specifically about deepfake videos after lawmakers raised concerns over a doctored video of House Speaker Nancy Pelosi (D-Calif.).

The companies shared those plans with a top Democratic lawmaker in letters this month.

The trio of top social media companies told Rep. Adam Schiff (D-Calif.), the chairman of the House Intelligence Committee, that they are all looking into whether it’s necessary to tweak their policies to account for the rise of deepfakes, videos or images that have been manipulated to make it appear as though people are doing or saying things they never did.

The issue burst into the spotlight earlier this year when Facebook declined to take down a user-posted video of Pelosi that was slowed down and edited to make it appear as though she was sick or drunk. The hundreds of comments on the video indicated viewers thought the video was real. Shortly after, President Trump shared a video that was spliced to make it seem like Pelosi was stumbling over her words.

The videos of Pelosi were not deepfakes, but they reignited a larger conversation about how the social media companies deal with manipulated footage, an issue expected to play a growing role in the upcoming presidential election.

Schiff pressed the companies over the issue in letters on June 15. The responses from the companies are dated July 31.

Here’s what each company is saying:

Twitter: Twitter’s director of public policy, Carlos Monje, wrote: “We are carefully investigating how to handle manipulated media and agree that manipulated media can pose serious threats in certain circumstances.”

“The solutions we develop will need to protect the rights of people to engage in parody, satire and political commentary,” he added.

Google: “We are always looking into new potential threats related to personal or societal harm arising from new technologies, including this one, and may further update our policies in the future if we identify gaps that are not currently covered by our existing rules or systems,” Karan Bhatia, Google’s vice president of government affairs and public policy, wrote. He noted that Google-owned YouTube has been working to combat manipulated media since its “early days,” pointing to the platform’s “deceptive practices” policies as an effort to address the issue.

Facebook: Facebook noted that CEO Mark Zuckerberg has publicly said the company may craft a new policy specifically for deepfakes.

“We have recently engaged with more than 50 global experts with technical, policy, media, legal and academic backgrounds to inform our policy development process,” Kevin Martin, Facebook’s vice president of U.S. public policy, wrote.