Protecting our users and society: guarding against terrorist content
Published on May 02, 2019
Dropbox helps people work better together by giving them easy-to-use tools to manage and share their content, and collaborate with their teams. Unfortunately, the reality is that a small minority may use services like ours to distribute harmful content. To combat this type of abuse, Dropbox is joining the Global Internet Forum to Counter Terrorism (GIFCT). We strongly support the Forum’s mission to disrupt the spread of content that promotes terrorism.
Joining GIFCT is the latest in a series of steps we’ve taken to fight harmful content on our platform. From our founding, we’ve made it clear that harmful content has no place on our services. Since 2010, we’ve not only taken down illegal or harmful content identified on Dropbox but also prevented any Dropbox user from sharing links to that content. And we’ve included content removal requests in our annual transparency reports on government data requests since 2015.
In 2018 we deepened our efforts to tackle the threat of terrorist content on our platform. We tested and then implemented a trusted flagger program, which enables us to prioritize content removal referrals from organizations, such as countries’ internet referral units, once those organizations have demonstrated a high degree of accuracy in identifying harmful content. We would like to thank the Europol Internet Referral Unit in particular for their help and advice in establishing this program. We also entered into URL sharing agreements with social media companies, including Twitter, so we can prioritize removal of material hosted on Dropbox that’s been widely shared on their platforms. And Dropbox participates in the EU Internet Forum, where we discuss ways to reduce the spread of terrorist content with European policymakers and other public and private sector organizations addressing this challenge.
Continuing to protect Dropbox and our users against harmful content is a priority for us. But so is protecting the privacy of the overwhelming majority of our users who have nothing to do with this kind of activity. We design our programs to ensure that privacy rights are considered carefully in every step of the process. It’s also important to guard against removal requests that may be erroneous or seek to silence legitimate speech. This is why we implemented an easy-to-use appeals process for terrorist content takedowns alongside our trusted flagger program.
One notable impact of the measures we deployed in 2018 has been a significant reduction in the average time it takes us to remove a piece of terrorist content. We’ll continue this work in 2019 by expanding our trusted flagger program and URL sharing agreements to include new partners, and investigating the potential of emerging technologies to help us identify and take down harmful content more quickly.
Stopping the spread of terrorist content online is a big job, and we are committed to continuing to work with partners in the public and private sectors to help protect our users and society from this threat.