Facebook Takes on Revenge Porn

Facebook announced this week that it will expand its effort to combat “revenge porn” — the act of sharing sexually explicit images of someone online without their consent — by asking users to voluntarily submit their own racy photos.
Sounds legit, right?
It actually could work, according to tech leaders and activists who argue that in order to combat unauthorized sharing of photos, there needs to be a tech-driven solution.  
“It’s demeaning and devastating when someone’s intimate images are shared without their permission, and we want to do everything we can to help victims of this abuse,” said Antigone Davis, global head of safety for Facebook, in a post on the site.
Last year, the social media giant released a set of tools that use photo-matching technology on the back end to stop an intimate image or video from being posted again once it’s already been reported and removed from Facebook-owned platforms, including Messenger and Instagram.
But that system is reactive: an image has to be published for all the world to see before it can be taken down.
“Even if these platforms are willing to take [the material] down, we know it can be very distressing for someone to have these images seen by their employer or their family members or peers,” says Erica Olsen, director of the Safety Net Project for the National Network to End Domestic Violence (NNEDV). “That reactive measure of taking them down after they’ve already been online — the damage is kind of done, in some way.”
In response, Facebook is taking a tip from survivors’ own proactive measures by using copyright law to block unauthorized images — in some cases, before they can even go up.
Olsen says that survivors have been known to register copyright on all the images or videos in their possession. That way, if a photo or video is uploaded to a site that screens uploads for copyrighted material, the image is immediately flagged and removed. On sites that don’t have those protections in place, copyright law still applies, and the images must legally be taken down.

Facebook’s new tool lets users upload sensitive images into a secure database to prevent them from being shared widely.

With the new protocol, people can privately upload intimate images of themselves to ensure that they can’t be uploaded by anyone else. Facebook has partnered with several safety organizations, including NNEDV, to facilitate the process: users are provided a one-time link to submit any images of themselves that they do not want shared online. The images are reviewed and given a unique hash — a digital fingerprint that can identify attempts to upload the same material in the future — without Facebook having to store the originals on its servers.
In this way, sensitive and damaging photos and videos are preemptively flagged and blocked from ever being seen by the public at large.
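The flow described above — fingerprint once, block forever, never store the image — can be sketched roughly as follows. This is an illustrative stand-in, not Facebook’s actual system: the class and function names are hypothetical, and a cryptographic digest like SHA-256 only catches byte-for-byte identical uploads, whereas production systems use perceptual hashing (PhotoDNA-style) that also matches resized or re-encoded copies.

```python
import hashlib


def fingerprint(image_bytes: bytes) -> str:
    # Hypothetical stand-in: a cryptographic digest yields an exact-match
    # fingerprint. Real photo-matching uses perceptual hashes that survive
    # resizing, cropping, and re-encoding.
    return hashlib.sha256(image_bytes).hexdigest()


class UploadFilter:
    """Stores only fingerprints of reported images, never the images themselves."""

    def __init__(self) -> None:
        self._blocked: set[str] = set()

    def register(self, image_bytes: bytes) -> None:
        # Called when a victim submits an image via the one-time link;
        # after hashing, the original bytes can be discarded.
        self._blocked.add(fingerprint(image_bytes))

    def allow_upload(self, image_bytes: bytes) -> bool:
        # Every new upload is hashed and checked against the block list.
        return fingerprint(image_bytes) not in self._blocked


filt = UploadFilter()
filt.register(b"victim-submitted image bytes")
print(filt.allow_upload(b"victim-submitted image bytes"))  # False: blocked
print(filt.allow_upload(b"some unrelated image bytes"))    # True: allowed
```

The key design point, mirrored in the article, is that only the hash is retained: the filter can recognize a blocked image without keeping a copy of it.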
Revenge porn has become a widespread problem. According to a 2016 study by the Data & Society Research Institute, more than 10 million Americans have either had someone threaten to post lewd images of them, or have been the victim of such images being shared online without their consent.
Amanda Lewandowski, a clinical teaching fellow at the Technology Law and Policy Clinic at New York University, has written extensively on the topic of how copyright law can be an effective — and proactive — solution to fighting revenge porn.
“Because an estimated 80 percent of revenge porn images are selfies, meaning that the subject and the photographer are one and the same, the vast majority of victims can use copyright law to protect themselves,” she says.
Olsen has seen the tactic work. “In many cases, survivors have learned that copyright laws [make it] a little bit easier to go after sites and get images taken down,” she says.
Now, with the rollout of Facebook’s new proactive self-reporting program, victims and survivors have one more tool in their arsenal to fight back against online harassment.

Editor’s note: This story has been updated to reflect the fact that Facebook provides users with a one-time link to upload photos, not partner organizations.

Meet the Privacy Expert on a Mission to Protect Your Digital Footprint

In the immediate aftermath of the 9/11 terrorist attacks, a new breed of nationalism took root that trained its attention on the foreigners among us. In response, the federal government adopted a set of strict policies and legislation that tracked immigrants in general and Muslim communities in particular.
“I felt like the whole country was in turmoil and at risk of abandoning its values for a false sense of security,” says Tim Sparapani, an expert in digital privacy and a NationSwell Council member. “I was always taught at moments like that you don’t look away; you get involved.”
So Sparapani did, finding his passion for social impact and public service within those tumultuous days. He joined the American Civil Liberties Union as senior legislative counsel and later helped establish Facebook’s presence in Washington as its first director of public policy. These days, the D.C.-based Sparapani leads SPQR Strategies, which he founded in 2011 as a consulting firm focused on online and digital data privacy.
It was at the ACLU that Sparapani gained his reputation as a fierce advocate for individual privacy, becoming a protector against what he says were unconstitutional policies. That included the Real ID Act of 2005, a significant piece of 9/11 legislation introduced and championed by Rep. James Sensenbrenner (R-Wisc.), which required people applying for a driver’s license or a government ID to produce several types of identification to prove their identity, such as a Social Security number, birth certificate, proof of citizenship and home address, and a mortgage statement or utility bill.
Democrats and the ACLU, along with moderate Republicans and a handful of libertarian organizations like the Cato Institute, thought the statute was “deeply unconstitutional,” says Sparapani. “Once you pulled back the layers, you saw it was based on nativism and ugly xenophobia.”
After the bill passed, Sparapani and his team at the ACLU spearheaded a campaign urging states to resist the federal regulations. They made their case to the public by highlighting how the new driver’s licenses mandated under the bill — which would carry electronic chips storing a person’s name, address, birth date and Social Security number — were prone to identity theft, could be used to track individuals’ travel, and would cost taxpayers billions of dollars.
“We were able to get dozens of states to independently enact legislation resisting the federal statute. That hasn’t happened since the Civil War,” Sparapani says. “It was our strategy to have state-by-state resistance to something that was tremendous overreach.”
Though the Real ID Act is still enshrined in federal law and, starting next year, will bar certain state IDs from being used to fly or gain access to federal buildings, Sparapani credits the campaign as his “a-ha moment,” when he realized there was a need to protect all U.S. residents’ privacy, especially from a government that he saw as wielding too much power.
“There was this new opportunity in the computer-database era for the government to exercise control over people in all sorts of nefarious ways by using technology for ill,” Sparapani says, adding that he’d like to see more people take up the cause for privacy rights online. “It’s kind of up to all of us to decide the rules for how we use technology as a society and put limits on it that are aligned with our constitutional values.”


Tim Sparapani is a NationSwell Council member and the founder of SPQR Strategies, a consulting firm that works with startups, established companies, and consumer and privacy advocates on the policy challenges raised by emerging technologies.