BeeHive Moderation Service
@moderation.beehivesafety.com
Delivering digital safety and security services and products for individuals, businesses, and governments around the world.

See something? Say something. Tap or click "Subscribe to Labeler", then report content to us from anywhere in Bluesky.
Seen! We'll review this account shortly, thank you!
April 25, 2025 at 10:57 AM
This content has no labels applied by us, so there's no positional context of ours to clarify. We'll review the account as needed.
April 21, 2025 at 12:49 AM
That's a question for a practicing attorney in your local area who is familiar with your area's legal definitions of these matters. I unfortunately cannot provide you with legal advice.
March 13, 2025 at 9:41 PM
Bluesky won't label a labeler; they'll just take enforcement action. Independent services will label labelers. I took a look and didn't spot a report from you, but we do have labels applied to many of these accounts ourselves already.
March 13, 2025 at 9:40 PM
*Otherwise*, that label they applied to your account isn't one shared by Bluesky, so there's nothing to be done about it. You can report a moderation service, TO a moderation service, if that makes you feel better, but...this is simply how labelers work.
March 13, 2025 at 9:25 PM
If you are...for lack of better wording, hell-bent on taking the label personally, you may wish to speak with an attorney if you believe you have a claim that the label is directly libelous or defamatory. There is no guarantee that you'd have a case. Not a lawyer, not your lawyer, not legal advice.
March 13, 2025 at 9:25 PM
Labelers **are meant to be opinionated**. You're not meant to agree with every single label - this IS moderation in a nutshell. However, Bluesky is not likely to get involved and arbitrate as a "fact finder" in these matters when they are "petty disagreements".
March 13, 2025 at 9:25 PM
Bluesky has no responsibility to revoke a labeler's ability to label. They reserve the right to terminate *abusive* labelers, but when they say "abusive" they mean spamming your account with racist/homophobic labels to insult you. THAT is what they mean by "abusive".
March 13, 2025 at 9:25 PM
It's important that you be able to spot it yourself, and simply that; there's never a guarantee someone else will spot it (or that they're not being paid not to spot it).
March 13, 2025 at 9:13 PM
Operations like the ones you're hinting at never stop or start, they augment, they change, they continue - your ability to discern and meter trust is what needs to be enhanced here, I promise. The future will only become trickier, the tactics deeper and more convincing.
March 13, 2025 at 9:13 PM
The toughest part is, with or without moderation service/labeler functionality, the financing will still come from state actors that prop the same information and opinions up just in a different way. Hate the players, acknowledge the game.
March 13, 2025 at 9:13 PM
Equally, as you've now found out, there are legitimate moderation services and then there are biased ones. It's important to find and stack moderation services that you TRUST to help you out without lying to you - and that will at least change their minds if they're wrong.
March 13, 2025 at 9:09 PM
It's preferred that only the most serious violations are brought to Bluesky, and having an independent moderation service take the brunt of most moderation "fluff" leaves them free to address serious matters like CSAM and related offenses.
March 13, 2025 at 9:09 PM
Where appropriate, we can and do escalate content to Bluesky's team for review.

This is infrequent; most content can be handled with tags and controls we can provide that resolve the user's distress without de-platforming another user.
March 13, 2025 at 9:09 PM
Generally it's not free to run a moderation service/labeler, and it's not something likely to have resources dedicated to it long-term just to screw with users. Most mischief snuffs itself out, and when the account disappears so does the label.
March 13, 2025 at 9:01 PM
Honestly, I would not worry about it. From a few minutes of searching public sentiment, it seems PEF is used by a very small and biased crowd; I would not worry about it derailing your discovery. Just keep in mind they were an unfair service, forewarn others, and find more trustworthy ones.
March 13, 2025 at 9:01 PM
These are the current labels on your Bluesky account globally. None of these are labels from Bluesky themselves, and none of them impact you algorithmically, but users subscribed to these labelers may not be able to see your posts.
March 13, 2025 at 8:55 PM
ClearSky (clearsky.app)
March 13, 2025 at 8:44 PM
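For the technically curious: tools like ClearSky aggregate this view, but you can also ask any individual labeler directly which labels it has applied, via the public com.atproto.label.queryLabels endpoint. A minimal sketch in TypeScript; the labeler service URL and DIDs below are placeholders, not real values.

```ts
// Ask a single labeler which labels it has applied to an account, using the
// public com.atproto.label.queryLabels XRPC endpoint.
// The service URL and DIDs are placeholders, not real values.
const LABELER_SERVICE = 'https://labeler.example.com'
const ACCOUNT_DID = 'did:plc:EXAMPLE_ACCOUNT'

const url = new URL('/xrpc/com.atproto.label.queryLabels', LABELER_SERVICE)
// Account-level labels are keyed by the account's DID; post-level labels are
// keyed by at:// URIs, which can also be passed as patterns here.
url.searchParams.append('uriPatterns', ACCOUNT_DID)

const res = await fetch(url)
if (!res.ok) throw new Error(`queryLabels failed: ${res.status}`)
const { labels } = (await res.json()) as {
  labels?: { src: string; uri: string; val: string; cts: string }[]
}

// Each label carries its source labeler (src), its subject (uri),
// and the label value (val), e.g. "rude" or "spam".
for (const label of labels ?? []) {
  console.log(`${label.src} labeled ${label.uri} as "${label.val}" at ${label.cts}`)
}
```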
We see the messages you can include with your reports. If you think there's context we need to know, tell us, tell us, tell us; we're listening.
March 13, 2025 at 8:44 PM
Feel free to "Subscribe to labeler", then use the "Report" flow like normal. When submitting, make sure we're selected as a recipient, or we may not see your report.

Pro tip, you can send reports to multiple moderation services at the same time as well.
March 13, 2025 at 8:44 PM
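For anyone who'd rather use the API than the in-app report flow, here is a minimal sketch of what a report addressed to a specific moderation service looks like over the AT Protocol, assuming the @atproto/api TypeScript client. The handle, app password, and DIDs are placeholders, and the exact method path may vary between client versions.

```ts
import { AtpAgent } from '@atproto/api'

// Placeholders only: substitute your own handle plus an app password, and the
// real DID of the moderation service you subscribe to.
const LABELER_DID = 'did:plc:EXAMPLE_LABELER'

const agent = new AtpAgent({ service: 'https://bsky.social' })
await agent.login({ identifier: 'you.example.com', password: 'app-password' })

// File a report against an account and address it to a specific moderation
// service via the atproto-proxy header (the "#atproto_labeler" suffix names
// the labeler endpoint in that service's DID document).
await agent.com.atproto.moderation.createReport(
  {
    reasonType: 'com.atproto.moderation.defs#reasonRude',
    reason: 'Optional free-text context the moderators will see.',
    subject: {
      $type: 'com.atproto.admin.defs#repoRef',
      did: 'did:plc:EXAMPLE_REPORTED_ACCOUNT',
    },
  },
  { headers: { 'atproto-proxy': `${LABELER_DID}#atproto_labeler` } },
)
```

The official app does the equivalent for you when you pick a recipient in the report dialog; sending the same report to several moderation services just means repeating the call with each service's DID.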
I see 1 report in **our** queue that I'm about to take care of.

Do you know of problematic users and content on this platform?

I'd love for us to handle your reports for the matter so that you can get the safety you're expecting from Bluesky themselves. Standing by right now for it, in fact.
March 13, 2025 at 8:44 PM
Report illegal/criminal content to Bluesky. Report "social crimes" to independent moderation services like ours. Let everybody you trust do their part, **individually**, to make your experience safer in a balanced and organized way.
March 13, 2025 at 8:44 PM
If you are submitting a report to Bluesky for every single thing you find potentially unlikable and then getting upset that they don't action the majority of them, that's them being impartial.

That means what you want is not more moderation from Bluesky, but more moderation services in your STACK.
March 13, 2025 at 8:44 PM
Bluesky's trust and safety effort is not centralized; they really only handle the worst-of-the-worst matters like CSAM, terrorism, and other legal matters.

They encourage you openly to utilize other trusted moderation services to help control what you are exposed to.
March 13, 2025 at 8:44 PM
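To make the "stack" idea concrete, here is a small sketch of how a client asks the public Bluesky AppView to return labels from a chosen set of moderation services, using the atproto-accept-labelers header. The labeler DIDs and the profile being fetched are placeholders; a real client would read your saved labeler subscriptions instead.

```ts
// "Stacking" at the protocol level: when fetching content from the public
// AppView, the atproto-accept-labelers header lists the labeler DIDs whose
// labels you want returned alongside that content. Both DIDs are placeholders.
const MY_LABELER_STACK = [
  'did:plc:EXAMPLE_LABELER_ONE',
  'did:plc:EXAMPLE_LABELER_TWO',
]

const url = new URL('https://public.api.bsky.app/xrpc/app.bsky.actor.getProfile')
url.searchParams.set('actor', 'someone.example.com')

const res = await fetch(url, {
  headers: { 'atproto-accept-labelers': MY_LABELER_STACK.join(', ') },
})
const profile = (await res.json()) as { labels?: { src: string; val: string }[] }

// Labels from every labeler in the stack come back on the same record;
// your client decides whether to warn, blur, or hide based on them.
for (const label of profile.labels ?? []) {
  console.log(`${label.src}: ${label.val}`)
}
```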