“This sounds as though it’s modeled on similar arrangements around child pornography. Except that there are some major differences between child pornography and “terrorist content.” The first is that child porn is de facto illegal. “Terrorist content” is quite frequently perfectly legal. It’s also much more of a judgment call. And based on this setup, allowing one platform partner to designate certain content as “bad” will almost certainly result in false positive designations that will flow across multiple platforms. That’s dangerous.
As we’ve discussed in the past, when you tell platforms to block “terrorist” content, it will frequently lead to mistakes, like blocking humanitarians documenting war atrocities. That kind of information is not just valuable, but necessary in understanding what’s happening.”
It is becoming very trendy to suggest that these social platforms must “do something” to prevent people from seeing information they might not want to see, or that they may not want others to see. Whether you are talking about “terrorist” content, hate speech, or “fake” news, the question always comes back to the same thing: who decides what is appropriate and what isn’t, and on what basis are they making that decision? Sure, we can maybe find some obvious stuff that we can all agree on, but eventually there’s going to be disagreement, and then what? How do I get my content restored if it gets marked as any of those things?
Who’s watching to make sure “safe” social networks don’t become completely devoid of free speech?