ThinkingSapien
thinkingsapien.bsky.social
No. That sounds like a variation of a FEC complaint that the RNC made against Twitter some years back. That complaint got dismissed.

The protections of 1A don't generally diminish with size or influence. This would just be an instance of 1A exercise.
November 5, 2025 at 3:30 AM
Huh? That would not remove Musk's right to make his statement.
November 5, 2025 at 2:35 AM
There have been humans who have made deadly informational mistakes and escaped liability for it. I'm not certain that AI would be radically different.
November 5, 2025 at 2:32 AM
I don't think that would help in the way you might think. It could make people's posts more vulnerable to the heckler's veto.
November 5, 2025 at 2:30 AM
Sounds like you and the math teachers from last week can relate.
November 5, 2025 at 2:27 AM
The fairness doctrine only ever applied to the use of govt owned radio spectrum (OTA radio and TV). I think 1A would prevent it from applying elsewhere.

False statements are not generally unlawful. 1A makes it difficult to make them so.
November 2, 2025 at 2:08 PM
I've not seen that there is such a line. One won't be treated as the publisher/speaker of posts because of having removed/arranged other posts any more than a book store would be treated as the author of a book for having removed/arranged other books.
November 2, 2025 at 2:04 PM
...CSAM is largely handled by Microsoft PhotoDNA and user reporting.

I'm not sure why you think this is a stretched analogy. The same laws that have protected bookstores from liability for content have protected online spaces. Email services' sorting and filtering are protected by §230.
October 31, 2025 at 1:21 PM
In the USA, an expression being designed to foment hate or division, or being a lie, isn't sufficient to make it unprotected speech. For malicious speech, it would depend on more factors.

Word filters in online posts have been present since at least the 1990s...
October 31, 2025 at 1:21 PM
Sorting and organizing my content doesn't make it more or less unlawful. It isn't much different than rearranging books in a book store. If I go to a book I like, related books are nearby. But one hasn't generally created "bad" info by organizing those books.
October 31, 2025 at 11:50 AM
As a person who only occasionally logs into FB, and for reading only, I appreciate that posts about life-changing events (including deaths) show up for me first instead of being buried by a chronological sort, which prioritizes recency instead of signals of importance.
October 31, 2025 at 11:46 AM
I started to find some information services, such as email, less useful until the services started prioritizing, filtering, and separating content for me. Same with my text messages. I would hate for this functionality to be removed to avoid arbitrary litigation.
October 31, 2025 at 11:44 AM
One justification is the general lack of specific awareness of the actual contents. When there are large amounts of information, having some way of sorting and prioritizing it is useful. This is generally automated and done without someone reading posts. Penalizing organization punishes everyone.
October 31, 2025 at 11:42 AM
...specific knowledge of the image or the facts behind it. Even without §230, the social media services could probably use Smith v California as a defense, among other defenses. But it would depend on the claims someone makes.
October 30, 2025 at 6:31 PM
The person who initiated the generation of the images would likely be the one that has liability for that. They are the initiator of both the production and publication of the image. The party whose tools they used and the service on which they published it likely have no ...
October 30, 2025 at 6:30 PM
There is no general obligation to be supportive of open debate. There is no liability for having not done so.
October 30, 2025 at 6:24 PM
Given the outcome of the ChatGPT defamation case, I think that the generative AI suppliers will be okay. As for other service providers, "amplified" is a bit nebulous. I think it can make for an ambiguous law.

reason.com/wp-content/u...
October 30, 2025 at 5:27 PM
Huh? What does §230 have to do with any of this? That only concerns certain civil liability issues.
October 30, 2025 at 12:39 PM
It results in giving power to the heckler's veto. The claims of good speakers and bad alike get silenced.
October 30, 2025 at 11:07 AM
No amount of upping moderation staff will make them more informed of the facts (or the lack of them) behind a post.

Consider the #MeToo movement. Doubling the staff doesn't tell a service if someone's claim is true. Their safe option would be to delete all such posts irrespective of truth.
October 30, 2025 at 11:06 AM
A whistleblower warned that Oakmont was engaged in fraud. Oakmont successfully held the service on which the messages were posted liable for defamation.

Such liability makes for a great tool for silencing true but critical speech.
October 29, 2025 at 11:34 PM
"Enable" is kind of a low bar. Paper and pencils enable defamation. Cars enable quick crime get-aways. "Enable" isn't indicative that someone is at fault.

Increasing the risks of hosting content affects good actors and bad. True words being labeled defamation was part of §230's inspiration.
October 29, 2025 at 11:32 PM
What does repealing §230 have to do with [de]monetizing content?
October 29, 2025 at 11:28 PM
How would a service protect someone from harassment? Especially distributed harassment, where the efforts of any one person might not be significant alone. It is the aggregate effect that wears away at a person.
October 29, 2025 at 11:25 PM