
Why Section 230 Actually Matters for the Internet as We Know It

I found this article from The Hill to be rather enlightening on the subject of Section 230 of the Communications Decency Act, because it shows that most of its critics, on both sides of the political spectrum, simply don’t understand what it is.

Most of them seem to think it either:

  1. Frees big tech companies from any responsibility for spreading “bad” information, or
  2. Gives them something to hide behind when they really just want to be censors.

But the fact is, Section 230 was never designed to let internet companies do either of those things. And without it, we’d be looking at a whole different internet. So, first, the history from that article:

Starting with earlier technologies like newswire services and radio, courts began to recognize that free speech norms and a need for pragmatic rules should outweigh arguments for holding what are essentially conduits of information liable for that information. One early case found that a radio station should not be subject to strict liability for a host remarking that a certain establishment was a “rotten hotel.” As information technology expanded, so did this norm to include new mediums and address concerns such as newsstands and libraries.

The idea, of course, was that if someone being interviewed on the radio shared an opinion, or lied, it wasn’t the radio station that was strictly liable; it was the person who said it. Unless there was some proof that the station purposely set out to publicize that information, or to slander or lie, the responsibility belonged to the person saying it, not the medium where they said it. Similarly, if a newsstand or library stocked a publication that libeled someone, it was the publisher who was at fault, not the stand that happened to stock that paper.

When the internet came about, this question was back in the courts, and Section 230 provided the clarifying answer. Again, if an illegal or defamatory message is out there, the liability for that message lies with the person who sent it, not with the message board, the email provider, the IM tool, etc.

We really wouldn’t want it any other way:

Today, much of the internet — Facebook feeds, YouTube videos, tweets, Airbnb listings — involves user-generated content. While critics of Section 230 often point to concerns about social media giants like Facebook and Twitter, the law affects a far broader range of content, including review sites, sharing economy platforms, online dating sites, and even comments on traditional newspapers’ websites.


This explosion of new and different uses of the internet is not coincidental and shows why Section 230 provided an important acceleration even if the common law would have eventually arrived at a similar conclusion.

Imagine, if you will, an internet where every single thing uploaded by every single user was the responsibility of the ISP, the hosting company, the website, the social network, and so on. In essence, Twitter could be held liable every single time someone was slandered or threatened. How do you think Twitter would handle that?

How would you handle it on your own website or message board? Of course, everything would become moderated. Literally everything. Every single thing you wanted to share over the internet, in a text message, a Snapchat, an email, or even a tweet, would need to be vetted and approved before it was published. Every blog post would need to be vetted. On my site, I suppose WordPress and my hosting company would both have to vet each post, probably after my ISP had done so first, before even sending it out to them.

Yeah, that’d be fun, huh? Yet for anyone calling for the abolition of Section 230 to force Facebook and Twitter to clean up their platforms and be held responsible for things that are shared, this is exactly what that looks like.

Section 230 made it clear that it is the person sending the message who is liable for its contents, not the medium, except in some very rare instances. So it’s Section 230 that actually allows us to talk to one another.

It also says that while platforms don’t HAVE to moderate content, they are free to decide whether they want to, and how. If they don’t do it perfectly, so be it. If they don’t do it at all, so be it. If they do it in a way you don’t like, so be it. Because you are free to start up your own internet service and moderate it, or not, the way you see fit. Now, some would argue that those companies are too big and you can’t possibly compete with them, and they may even have a point about that, but the monopoly argument has nothing to do with Section 230. Abolishing Section 230 won’t make Google and Facebook go away; it will actually make it harder for anyone who isn’t a huge tech giant to compete, because, I don’t know about you, but moderating everything takes a lot of time, money, and resources. Who has that?

Google, Facebook, Apple, Amazon, maybe a few others.

Certainly not this little corner of the web. So it would probably disappear. As would most of what we now have.

More than that, they would probably still be making huge mistakes in content moderation, because it’s really hard to do at scale, and there’d be nowhere else to turn.
