Section 230 and Internet Freedom, a Proposal


There has been talk about Section 230 (part of the Communications Decency Act of 1996; you can read it here, it is not long) being repealed or replaced. The intent of the law was to protect owners of “Interactive Computer Services” (i.e. websites, especially those that publish comments) who wished to maintain editorial control over what their users publish.

Under Section 230, a newspaper can delete comments on articles that it deems unacceptable by its own criteria without worrying about being sued. Also, the section talks a lot about blocking software installed on users’ clients, so a site cannot sue a content blocker for preventing users from seeing certain sites.

Despite what some people say, Section 230 says nothing about providers having to maintain neutrality or any kind of “fairness”. In fact, it is pretty vague about the criteria for restricting access. Nor does it prevent (as some have claimed) sites from being sued for the actions of their users.

Nevertheless, depending on who you talk to, Section 230 is the foundation for the open web we see today. I tend to agree, but think that the original intent has been extended and the section is too vague to be useful in today’s world.

This post is my attempt to articulate what I would like to see implemented.

Some Terms

Provider: a website/service that allows comments. Note that this does not include web hosts or domain name registrars, which are not covered by this proposal.

Comment: a piece of content provided by a user of the site that is visible to other users. This may be text, images, or other media. YouTube videos count.

User: an individual using the provider’s site, possibly leaving comments.

Goals

The web has changed since 1996, with a few huge players wielding great power over what people see.

It is a goal that the free speech of individuals be protected, especially political speech.

It is a goal to maintain and even increase the diversity of discourse on the internet.

It is not a goal to legislate sites out of existence with compliance costs, especially small sites.

The primary goal is to remove some claimed Section 230 protections, potentially making providers and users jointly responsible for (some) content. I argue that these claimed protections never existed in any real sense and my proposal just codifies the existing situation while strengthening protections for smaller providers.

Free speech is important, but this does not oblige a provider to actually publish a comment. Nor does it allow a user to defame, harass, or shout fire in a crowded theater without consequences. Both the user and the provider will have free speech as a defense against any potential legal action.

The main threat to websites is not government action (although that cannot be discounted) but sue-happy commercial interests with deep pockets. This proposal strengthens a provider’s protections in some cases, to promote diversity of discourse, while removing them in others, to promote provider responsibility and clear up unclear law.

Secondly, there is no point in increasing compliance costs for small sites or other places where a comment would have very little impact. A harmful comment on a forgotten forum read by 7 people should be treated differently from a harmful social media post with 6 million views. A “best effort” attempt to moderate harmful posts should shield a small site from legal problems.

I propose a legal test to see if the provider is jointly responsible for a comment.

How this will work in practice:

Imagine a harmful comment that defames someone.

For small sites, blogs, forums, and message boards, I do not see anything changing for the worse. In fact, my proposal explicitly gives them an excellent legal defense that does not explicitly exist under Section 230 today. Sites with a certain editorial slant will be more protected: providers can remove comments without losing protection, with no expectation of fairness.

Users will still be responsible for legally harmful comments in all cases.

The sites most affected will be the big social media and streaming sites like YouTube, X, and TikTok. For these sites, any harmful content that they pay for, decide to show unasked for, or that “goes viral” to more than a threshold number of people could attract legal problems. They have the resources to make sure all such content is non-harmful; it is up to them to decide whether to spend those resources on moderating the content or on beefing up their legal teams.
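
To make the proposed test concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration rather than draft legislation: the function and field names, the threshold value, and the way the small-site “best effort” defense interacts with the other conditions are my own assumptions layered over the conditions described above (paid content, unsolicited promotion, and a viral reach threshold).

```python
from dataclasses import dataclass

# Hypothetical reach threshold; the proposal deliberately leaves the number open.
VIRAL_THRESHOLD = 1_000_000

@dataclass
class Comment:
    views: int                # how many people the comment reached
    paid_promotion: bool      # the provider paid for or sponsored the content
    pushed_unsolicited: bool  # the provider showed it to users who did not ask for it

@dataclass
class Provider:
    is_small: bool                # small site, blog, forum, or message board
    best_effort_moderation: bool  # a good-faith attempt to moderate harmful posts

def provider_jointly_responsible(provider: Provider, comment: Comment) -> bool:
    """Sketch of the proposed test: is the provider jointly responsible,
    together with the user, for a legally harmful comment?"""
    # A best-effort attempt to moderate shields a small provider.
    if provider.is_small and provider.best_effort_moderation:
        return False
    # Otherwise responsibility attaches if the provider paid for the content,
    # pushed it to users who did not ask for it, or let it spread past the
    # viral threshold.
    return (
        comment.paid_promotion
        or comment.pushed_unsolicited
        or comment.views >= VIRAL_THRESHOLD
    )

# The forgotten-forum example: a defamatory post read by 7 people on a small,
# moderated site does not make the provider jointly responsible.
forum = Provider(is_small=True, best_effort_moderation=True)
post = Comment(views=7, paid_promotion=False, pushed_unsolicited=False)
assert not provider_jointly_responsible(forum, post)

# A promoted post pushed to millions on a large platform does.
platform = Provider(is_small=False, best_effort_moderation=True)
viral = Comment(views=6_000_000, paid_promotion=True, pushed_unsolicited=True)
assert provider_jointly_responsible(platform, viral)
```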

There are probably many things wrong with this proposal, but I stand by my stated goals of keeping comments on the web alive while promoting provider responsibility.