There has been talk about Section 230 (part of the Communications Decency Act of 1996; you can read it here, it is not long) being repealed or replaced. The intent of the law was to protect owners of “Interactive Computer Services” (i.e. websites, especially those that publish comments) who wished to maintain editorial control over what their users publish.
Under Section 230, a newspaper can delete comments on articles that it deems unacceptable by its own criteria without worrying about being sued. The section also talks a lot about blocking software installed on users’ clients - so a site cannot sue a content blocker for preventing users from seeing certain sites.
Despite what some people say, Section 230 says nothing about providers having to maintain neutrality or any kind of “fairness”. In fact, it is pretty vague about the criteria for restricting access. Nor does it prevent (as some have claimed) sites from being sued for the actions of their users.
Nevertheless, depending on who you talk to, Section 230 is the foundation for the open web we see today. I tend to agree, but think that the original intent has been extended and the section is too vague to be useful in today’s world.
This post is my attempt to articulate what I would like to see implemented.
Some Terms
Provider: a website/service that allows comments. Note that this does not include web hosts/domain name registrars, which are not covered by this proposal.
Comment: a piece of content provided by a user of the site that is visible to other users. This may be text, images, or other media. YouTube videos count.
User: an individual using the provider’s site, possibly leaving comments.
Goals
The web has changed since 1996, with a few huge players wielding great power over what people see.
It is a goal that the free speech of individuals be protected, especially political speech.
It is a goal to maintain and even increase the diversity of discourse on the internet.
It is not a goal to legislate sites out of existence with compliance costs, especially small sites.
The primary goal is to remove some claimed Section 230 protections, potentially making providers and users jointly responsible for (some) content. I argue that these claimed protections never existed in any real sense and my proposal just codifies the existing situation while strengthening protections for smaller providers.
Free speech is important, but this does not oblige a provider to actually publish a comment. Nor does it allow a user to defame, harass, or shout fire in a crowded theater without consequences. Both the user and the provider will have free speech as a defense in any potential legal action.
The main threat to websites is not government action (although that cannot be discounted) but sue-happy commercial interests with deep pockets. This proposal strengthens a provider’s protections in some cases, to promote diversity of discourse, while removing them in others, to promote provider responsibility and clear up murky law.
Secondly, there is no point in increasing compliance costs on small sites or in other places where a comment would have very little impact. A harmful comment on a forgotten forum read by 7 people should be treated differently from a harmful social media post with 6 million views. A “best effort” attempt to moderate harmful posts should shield a small site from legal problems.
I propose a legal test to determine whether the provider is jointly responsible for a comment (a code sketch follows the list).
- Did the provider show the comment to more than a certain threshold of people, say 50,000 per 12 months? Up to that limit the provider is shielded if it can show that it made a “best effort” attempt to limit damage. Past the threshold, joint responsibility attaches.
- Any provider that has a commercial relationship with the commenter is responsible for the content. Either they are paying the commenter or the commenter is paying them. Perhaps thresholds should exist on the latter so that sites that survive on donations from users are not affected. Let’s say the limit here is $200 a year in either direction. Advertising counts as a commercial relationship.
Note that this is not per comment: $200 total. None of this demonetizing a particular video; if the provider is paying the commenter for any reason, then they are jointly responsible for that commenter’s content.
- A distinction will be made between comments that the user has asked to see by browsing to a page that happens to contain them (think an article or blog post with comments at the bottom) versus comments that were surfaced by the provider without the user’s explicit action (think tweets in your timeline from users you have not followed, courtesy of the infamous “algorithm”, which is just a scary word for editorial fiat). The provider chose to show the latter content and becomes jointly liable if it is found harmful.
Note that this does not prevent the provider from moderating comments or force any kind of “fairness”. The provider’s decision not to show certain comments will not by itself make them liable for the comments they do show. Conversely, the provider’s decision to show an unrequested comment will automatically make them jointly liable.
- Loopholes will always exist, but providers restructuring their sites to avoid the thresholds should be disallowed. No breaking a site into thousands of subdomains to maintain a low user count. No terms-and-conditions clause claiming that the user explicitly asked for algorithmic content. No claiming that the user asked to see all the comments in the world and the provider merely filtered them down to this one.
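To make the test concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the `Comment` fields, the function name, and the threshold constants are invented for illustration, and the actual numbers would be set by whatever legislation implements this.

```python
from dataclasses import dataclass

# Hypothetical thresholds from the proposal; real values would be set in law.
VIEW_THRESHOLD = 50_000        # views per 12 months
COMMERCIAL_THRESHOLD = 200.00  # dollars per year, in either direction

@dataclass
class Comment:
    views_last_12_months: int
    annual_payments_between_parties: float  # provider <-> commenter, either direction
    surfaced_by_provider: bool              # shown unrequested (feed, recommendation)
    best_effort_moderation: bool            # provider moderated in good faith

def provider_jointly_liable(c: Comment) -> bool:
    """Sketch of the proposed legal test for joint provider liability."""
    # Test 1: wide distribution attaches joint responsibility.
    if c.views_last_12_months > VIEW_THRESHOLD:
        return True
    # Test 2: a commercial relationship above the threshold attaches it too.
    if c.annual_payments_between_parties > COMMERCIAL_THRESHOLD:
        return True
    # Test 3: content surfaced without the user's explicit request.
    if c.surfaced_by_provider:
        return True
    # Otherwise the provider is shielded, provided it moderates in good faith.
    return not c.best_effort_moderation
```

The order of the tests does not matter; any one of them attaching liability is enough, and the “best effort” shield only applies when none of the other conditions are met.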
How this will work in practice:
Imagine a harmful comment that defames someone.
- The comment is posted to a small forum with a few thousand users. The forum removes the comment after a few days. Little damage, the forum is protected.
- The comment is posted to social media and seen by the user’s friends (say 200 views). Little damage - the site is protected.
- The comment is posted to social media and gains enough traction that the site decides to add it to their “People are currently talking about…” public feeds. This decision exposes the site to legal action.
- A user makes a video that gets a few thousand views. Little damage - the site is protected so long as it makes a “best effort” to take the video down if asked.
- A user whose series of videos the provider pays for makes the harmful comment. The site has joint liability for the actions of what amounts to a contractor.
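Running these scenarios through the sketch above (again purely illustrative, reusing the hypothetical `Comment` class):

```python
# The viral comment promoted into a "People are currently talking about..."
# feed: surfaced by the provider without the user asking for it.
promoted = Comment(views_last_12_months=200,
                   annual_payments_between_parties=0.0,
                   surfaced_by_provider=True,
                   best_effort_moderation=True)
assert provider_jointly_liable(promoted)

# The small forum that removes the comment after a few days: shielded.
small_forum = Comment(views_last_12_months=2_000,
                      annual_payments_between_parties=0.0,
                      surfaced_by_provider=False,
                      best_effort_moderation=True)
assert not provider_jointly_liable(small_forum)

# The paid video series: the commercial relationship attaches joint liability.
paid_creator = Comment(views_last_12_months=3_000,
                       annual_payments_between_parties=500.0,
                       surfaced_by_provider=False,
                       best_effort_moderation=True)
assert provider_jointly_liable(paid_creator)
```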
For small sites, blogs, forums, and message boards, I do not see anything changing for the worse. In fact, my proposal explicitly gives them an excellent legal defense that does not currently exist explicitly under Section 230. Sites with a certain editorial slant will be better protected - providers can remove comments without losing protection, with no expectation of fairness.
Users will still be responsible for legally harmful comments in all cases.
The sites most affected will be the big social media sites like YouTube, X, and TikTok, and streaming sites. For these sites, any harmful content that they pay for, decide to show unasked, or that “goes viral” past the threshold number of viewers could attract legal problems. They have the resources to make sure all such content is non-harmful; it is up to them whether to spend those resources on moderating content or on beefing up their legal teams.
There are probably many things wrong with this proposal but I stand by my stated goals of keeping the comments on the web alive while promoting provider responsibility.