Section 230 of the Communications Decency Act has been in the news a lot lately. Conservatives hate it!
As I told @jack, by labeling posts, @twitter is taking a policy position. When taking a policy position, you’re acting as a publisher (even under current law). #BigTech can’t pretend to not be a publisher & get special benefits under #Section230. https://t.co/I3yebollh1
— Senator Ted Cruz (@SenTedCruz) December 7, 2020
It is now broadly recognized that Joe Biden doesn’t like Section 230 and has repeatedly shown he doesn’t understand what it does. Multiple people keep insisting to me, however, that once he becomes president, his actual tech policy experts will understand the law better, and move Biden away from his nonsensical claim that he wishes to “repeal” the law.
In a move that is not very encouraging, Biden’s top tech policy advisor, Bruce Reed, along with Common Sense Media’s Jim Steyer, has published a bizarre and misleading “but think of the children!” attack on Section 230 that misunderstands the law, misunderstands how it impacts kids, and suggests incredibly dangerous changes to Section 230. If this is the kind of policy recommendation we’re to expect over the next four years, the need to defend Section 230 is going to remain pretty much the same as it’s been over the last few years.
Well… not all liberals.
Just look at the #BlackLivesMatter movement. So many cases of unjust use of force against Black Americans have come to light via videos on social media. Not a single #MeToo post accusing powerful people of wrongdoing would be allowed on a moderated platform without 230.
— Ron Wyden (@RonWyden) December 11, 2020
Jeez, this must be a really complicated law if nobody can even agree on what it says. What’s that? It’s not? Here’s the only part that isn’t basically statements of principles, a glossary, or footnotes:
(c) Protection for “Good Samaritan” blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
(d) Obligations of interactive computer service
A provider of interactive computer service shall, at the time of entering an agreement with a customer for the provision of interactive computer service and in a manner deemed appropriate by the provider, notify such customer that parental control protections (such as computer hardware, software, or filtering services) are commercially available that may assist the customer in limiting access to material that is harmful to minors. Such notice shall identify, or provide the customer with access to information identifying, current providers of such protections.
People are wrong about this law in myriad ways. The most popular misconception seems to be that websites must act as “platforms, not publishers” if they want this protection; making editorial decisions, this argument goes, turns them into publishers. You will notice that this does not appear in the law.
It is very, very straightforward. A website–ANY website–which allows user-generated content is (broadly speaking) not liable for that content and can moderate it however the hell they want. In other words, Section 230 protects the First Amendment right of the people who run any website to control what happens on their own property, and shields them from liability should a user post illegal content without the site’s knowledge.
Many of the suggested “reforms”, such as those by butthurt conservatives, are offered in bad faith. But even legislators we like more, such as Brian Schatz (D-HI), are introducing bills that would compel censorship and lead to regulatory capture by the largest businesses, like Facebook and Twitter. (Do you want Facebook stamping on a human face forever? Because this is how you get it.)
What would a world without Section 230 look like? We actually have a great case study. In 2018, a package known as FOSTA-SESTA was signed into law, with the stated intent of cutting down on online sex trafficking. It removed Section 230 protections for user-posted content found to be promoting “sex trafficking”. This opened websites up to huge potential liabilities, mostly around advertisements for sex work. Rather than figure out how to proactively prevent this content from being posted, sites like Craigslist simply shut down their entire Personals section, and Backpage (seized by the federal government that same spring) went dark entirely. This probably did not decrease the amount of sex trafficking, but, as the DOJ argued at the time, it probably did make it harder to detect and prosecute. It doubtless also contributed to Tumblr’s decision to ban all adult content (as well as “female-presenting nipples”), and Facebook’s decision to start shutting down communities that kinda sorta hint at porn or sex work, even in jest. Needless to say, this has been to the detriment of online communities for sexual minorities.
So, keep an eye on this. If you like commenting on political blogs, it may soon be relevant to you.
(But don’t take my word for it. TechDirt has a wonderful post going over all this: Hello! You’ve Been Referred Here Because You’re Wrong About Section 230 Of The Communications Decency Act)
Elizabelle
Thanks for this, M4. Good background.
Baud
I thought this was a pet and culture blog.
Major Major Major Major
@Baud: those too!
Roger Moore
I think people believe this because that’s how common carrier status works in other forms of communication and transportation. If your phone company wants to avoid liability for how you use your phone, or FedEx wants to avoid liability for what you’re shipping, they can’t discriminate based on how their services are used. People assume internet providers have the same kind of relationship.
Major Major Major Major
@Roger Moore: also because politicians who know better are saying it constantly.
Fair Economist
Removing Section 230 for advertising would really help put the brakes on Facebook and other disinfo-driven social media.
debbie
Well, since I can still say it, I’m laughing my ass off at the New York Post’s calling Trump the King Lear of Mar-a-Lago!
Chief Oshkosh
@Baud: Pets have nipples, too!
Major Major Major Major
@Fair Economist: I’m having a hard time parsing this into an actual policy? Sites would be legally responsible for ads placed on them? What about syndicated ads?
Bill Arnold
The eff.org explainer is pretty OK:
https://www.eff.org/deeplinks/2020/12/section-230-good-actually
Eliminating Section 230, or passing most reforms short of strengthening it, would shut down most online venues for free speech. Shutting down free speech is the intent, IMO. Politicians, media companies, advertisers/marketing people, and others want control over political speech unsullied by ordinary US citizens, and eliminating that speech online for the vast majority of US citizens is a way to do this.
Not everyone is an F2F activist. Not everyone has a paid political punditry gig. Etc.
And the side effects would be shutdown of non-political speech as well; e.g. a doll club would need liability protection for comment sections and would need to censor all content before publishing it, or get lawyers to sign off on some workaround for whatever the rules are re liability.
People would be able to shut down free speech venues maliciously, by deliberately posting content that gets them sued or prosecuted. Grey-area businesses would be set up to do this for a modest fee.
Brachiator
My thanks as well. This is great stuff. I will have to come back and read more later. But so far, just a lot of good stuff.
Chief Oshkosh
@Bill Arnold:
Finally! A business plan for our times!
WaterGirl
I like this passionate and pissed off (in a good way) M4!
debbie
What are the options for protecting free speech while minimizing the ridiculous, lunatic disinformation? Or is there nothing to be done?
Brachiator
Yes. Yes. Yes.
A lot of this is connected to the phony idea that Google algorithms “punish” conservative messages. But the algorithms reflect what people are interested in. Sometimes this is good, sometimes it is bad. But imagine if some people insisted (and some have) that posts asserting that the Earth is flat or that the moon landing is a hoax be given absolutely the same weight as, for example, posts about the upcoming Mars rover landing in February or the Great Conjunction.
Major Major Major Major
@Brachiator: thanks!!
Fair Economist
@Major Major Major Major: Yeah, sites would be liable for ads placed on them, syndicated or otherwise.
Roger Moore
@debbie:
Whatever those options are, they have little to do with Section 230. It’s possible right now for providers to screen out lunatic disinformation. It’s just that the biggest ones don’t want to because lunatic disinformation creates high engagement (i.e. captures eyeballs) and thus is profitable. Also, too, the worst offenders, especially Facebook, have been captured by political forces that want to preserve the ability to spread disinformation.
artem1s
True, the issue over 230 is not well understood. But as long as websites are selling ad space to soft porn sites and conspiracy nutjobs, I’m not inclined to be sympathetic to their wanting to not get sued over poorly monitored advertising content.
RSA
Could you flesh this out a bit? I don’t understand how proposed changes would empower a company like Facebook.
Major Major Major Major
@Fair Economist: this would probably make it really hard for balloon-juice to support itself.
@artem1s: so sites like this blog.
Roger Moore
@RSA:
He explains it in the previous sentence: regulatory capture. The very realistic worry is that whatever regulator is set up to control what happens online, they will wind up being in the pocket of today’s big internet companies. The big players will have the resources to keep the regulators happy, while newcomers who don’t already have a good relationship with the regulators and don’t have the resources to meet their demands will be effectively shut out. Thus the regulation will wind up entrenching the existing dominant players.
Major Major Major Major
@RSA: regulatory capture. Facebook is actually begging lawmakers to impose certain kinds of restrictions because only companies like Facebook are big enough to comply. Prevents the formation of Facebook competitors.
ETA as Roger explains above!
fake irishman
Glad to see your pushback on this, M^4. Nice pithy explainer. I’ve not entirely sorted out how I feel about this whole thing, but many of my fears dovetail with yours.
PJ
Google and Facebook have made fortunes by ripping off artists (where would they be without “content”?) Getting rid of Section 230 would finally allow artists to sue them for infringement. If that puts Google and Facebook out of business, so much the better. If that seriously handicaps all social media, so much the better.
(BTW, Facebook has been stamping on human faces for years.)
PsiFighter37
There is no doubt that Section 230 has enabled the rise of many of the megacap technology companies that exist today, and allowed them to operate in a manner that is highly intrusive to one’s privacy. But the GOP’s goal here is completely tangential to that; they just want to destroy the law outright. I think there are definitely parts of 230 that need to be rewritten now that the Internet has been around for 25 years and evolved past AOL and Netscape, but ditching it outright is a terrible idea.
Major Major Major Major
@PJ:
more likely this would actually empower copyright trolls and get rid of every Web 2.0 site other than Google and Facebook. There are lots of problems on the internet and we need some better regulations, but Section 230 is really not the place for reforms.
Major Major Major Major
@PsiFighter37: could you elaborate on the connection between Section 230 and privacy?
PsiFighter37
@Major Major Major Major: It’s more that because Section 230 allows for free speech, all of this supposedly unregulated content people are posting has been mass-harvested by technology companies to hone their advertising capabilities to a T.
Sure, tech companies want protection from being sued for what everyone says, but frankly, the reason these companies rake in the kind of money they do from advertisers everywhere is the content that people are allowed to post. One of the books on Obama’s year-end list last year (‘Surveillance Capitalism’) does a very deep dive into it (although it can be repetitive at times).
RSA
@Roger Moore:
@Major Major Major Major:
Thanks for the explanation! You’ve made it clear—and it’s something I should have understood already.
John Revolta
So, if I start a website, and I decide I want it to be about, I dunno, coffee, but not about, say, movies, I guess Ted is saying that I’m a “publisher” because I made a “policy decision”, is that it?
sab
Totally ignorant here, but we can’t stop bad content completely, and who defines bad?
It seems to me Balloon-Juice is more at risk from repealing 230 than filthy rich Facebook and Twitter, which can afford to fight back.
Punchy
OT: Holy shit, they’re suing themselves. Just…..WOW.
Major Major Major Major
@PsiFighter37: I’ve worked in surveillance capitalism on and off over the years, on both sides… what we need are privacy laws, and sensible ones, not 230 reform.
I will note that one of said employers is now lobbying heavily for a federal privacy bill in the mold of CCPA and GDPR—and they’re on the advertising side, explicitly angling for regulatory capture. So we should take care that we don’t just do the politician’s syllogism here.
randy khan
It is hard to overstate the importance of Section 230 to the modern Internet in the U.S. (You can make some arguments about whether you *like* the modern Internet, but they mostly have nothing to do with Section 230.)
Forget for a moment the way that Section 230 protects every site that accepts comments or customer reviews. (Yelp doesn’t exist without Section 230; Reddit doesn’t exist without Section 230; heck, Pinterest probably doesn’t exist without Section 230.) It also protects every site that gets ads via a 3rd-party ad network from liability for those ads. For that matter, it protects Lowe’s and Amazon, which get most of their product descriptions from third parties, and would have to spend a lot of money to make sure they’re all right (and even then probably wouldn’t be safe).
I’ve said this before, here and elsewhere, but Section 230 works extremely well for the vast majority of the Internet, and even the vast majority of social media. The problems are big ones, but they actually are confined to a small number of sites, and that’s nowhere near enough reason to throw out the baby with the bathwater.
Major Major Major Major
@sab: yeah this is pretty much correct.
@John Revolta: he would of course tell you you’re too stupid to understand, but yes.
randy khan
@Major Major Major Major:
Exactly. That should be a much higher priority. And Facebook’s stupid campaign against Apple’s decision to allow people to decide how their information will be shared tells you just how important it is.
trollhattan
@Punchy:
Whenever I see Gohmert mentioned I can’t help but laugh. This should be good!
Roger Moore
@PJ:
Suits over copyright infringement would be handled under the DMCA (Digital Millennium Copyright Act), not the CDA (Communications Decency Act). The DMCA gives different kinds of protections to companies like Facebook and Google. As long as they follow the notice-and-takedown procedure spelled out in the DMCA, they can’t be sued when their users post someone else’s copyrighted material.
Tenar Arha
I’m still in the camp that it isn’t section 230 that’s the problem. It’s size, design, & a refusal to enforce one’s own rules.
1) First, these are all clearly internet monopolies, & thus we should break up these services (FB’s role in the genocide in Myanmar & their lying about the “shift to video” stats have always been enough to convince me we need to do something).
2) Too many of these services, as designed, reward abuse & lies with more attention. I still think that this could be changed. Clearly the services think so too, or they wouldn’t be bending over backwards to ignore their own engineers & designers, who have shown their algorithms can be re-tuned to pass along less disinformation.
3) Last, I think it’s primarily that the biggest players refuse to enforce their own rules as detailed in the EULA that everyone who uses their services “signs” when they join. They don’t enforce their own rules, & then they don’t ban people who figure out ways to abuse their services.
The obvious end user license agreement (EULA) example on Twitter is DJT. He regularly broke the EULA he’d agreed to, yet AFAICT he was never locked out or banned from his account before he ran for President (& then Twitter fell back on that stupid “newsworthy” excuse they came up with). Any ordinary user would have been banned (after, like, the 4th or 5th time) for the things DJT said on Twitter regularly, yet he never even received a timeout. & of course, if you’re too big to enforce your own EULA, well, that’s a sign you might be ripe for breakup.
Roger Moore
@Tenar Arha:
Somebody actually tested this. They set up an account that just copied everything Trump said. They were banned in a couple of days.
Major Major Major Major
@Tenar Arha: One thing worth noting is that any explicitly delineated moderation policies will get gamed, and “don’t be a dick and we get to decide what that means” is perfectly acceptable in such a fast-changing space.
LeftCoastYankee
It does seem like it would be possible to differentiate between sites which allow user posting and advertising services for those sites.
The advertising services should be required to implement something more stringent in validating their advertisers than just being automated gateways for data harvesting, malware and misinformation.
Considering the entire economic foundation of the internet is advertising, if this is done right it could encourage newer smaller players in the market as well.
sab
@trollhattan: Gohmert suing Pence. Dumb and dumber, but I don’t know who is which.
Eunicecycle
@sab: I think Gohmert became the dumbest member of Congress after Pence left. So I guess technically that makes Pence dumber.
Doug R
Better privacy laws with functionality not tied to agreements as much as possible. Opt-in agreements where you are PAID (it doesn’t have to be much) for collection of your information. Disclaimers on ads (clickable or plain text) that explain why that ad was chosen.
More editorial control over content that is paid for; anything like political ads should carry a large disclaimer across it: “Political Advertisement: Content not fact-checked.”
The ability of anyone who’s been banned to have a process where they can request a return, possibly with a big disclaimer on each post.
Roger Moore
@Eunicecycle:
Gohmert: When I left you I was but the learner. Now I am the master.
Pence: Only a master of dumbfuckery, Louie.
Major Major Major Major
@Doug R:
Why should I have to let some spammer who I banned from my blog harangue me?
Tenar Arha
@Roger Moore: Yep.
@Major Major Major Major: Yep. Yep.
You can automatically filter out only a tiny bit of trouble by flagging keywords, but limiting internet abuse needs “don’t be a dick” & “don’t game the system” & lots of ban hammers, combined with lots of different kinds of individually controlled blocking tools. All of it should be part of any social media design. If the site doesn’t start there, it intends to be a sewer.
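(A quick, purely hypothetical sketch, in Python, of why keyword flagging catches so little: the filter below is trivially easy to write and just as trivially easy to evade, which is why the ban hammers and blocking tools still have to do most of the work. The blocklist and sample posts are made up for illustration, not taken from any real service.)

import re

# Hypothetical blocklist; real moderation lists are far larger and still leaky.
BLOCKED_KEYWORDS = {"scamcoin", "miraclecure"}

def flag_by_keyword(post: str) -> bool:
    """Return True if any blocked keyword appears as a whole word in the post."""
    words = re.findall(r"[a-z0-9]+", post.lower())
    return any(word in BLOCKED_KEYWORDS for word in words)

if __name__ == "__main__":
    posts = [
        "Buy ScamCoin today!",         # caught: exact keyword match
        "Buy Sc@mCoin today!",         # missed: one swapped character
        "Buy S C A M C O I N today!",  # missed: spacing defeats the tokenizer
    ]
    for post in posts:
        print(f"{post!r:32} -> flagged={flag_by_keyword(post)}")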
davecb
There is a problem with media companies representing their own decisions as those of their commentators. If I am a traditional newspaper, and I choose to put a letter to the editor on the front page, I have exercised editorial judgement.
If the letter is a criticism of a police constable for murdering an innocent black man, many will agree with me, and say I should publish it, and should not be held responsible for the opinion of the writer.
If, on the other hand, I happen to be in a certain western province, many will disagree, say the police were in the right, and insist that I cannot choose to feature my correspondents saying such things without taking responsibility for my choice.
Which case applies? Facebook arguably chooses to put evil on their “front page”… are they a publisher? Am I?
Major Major Major Major
@davecb:
No. I quoted the relevant part of this very clear law above.
davecb
@Major Major Major Major: [Belatedly] Understood: there’s little question in the US. In Canada, there’s no case law on it, so the question is wide open.
There’s also the public-policy question of whether the U.S. law should allow Facebook to represent editorial decisions as equivalent to the acts of a third party (;-))