Alice Hunsberger
@aagh.bsky.social
Trust & Safety loudmouth.
(Head of T&S at Musubi; writes T&S Insider; host of Trust in Tech podcast)
More here: https://alicelinks.com/about-alice
Frontline teams will tell you that hate from customers is nothing new, but I really do think it will ramp up over the next few years, and I want to make sure our frontline teams don't suffer for it.
/🧵
January 21, 2025 at 5:34 PM
Create resources and support for your users/ customers who may also be the target of harassment and hate. Signal to them in signs/ messages/ FAQs/ knowledge bases/ etc. that your team will support them.
January 21, 2025 at 5:34 PM
Let your team use fake names when responding to the public, or (even better) don't use names at all.
Recognize that some open ways of signaling support (e.g. putting pronouns in your signature) will also open people up to harassment.
Allow folks to make decisions based on what's right for them.
January 21, 2025 at 5:34 PM
Create psychological safety for your team.
Let them know that you have their back.
Listen to them.
Check in on them.
Make sure they have benefits that cover mental health support.
We're all going to need it.
January 21, 2025 at 5:33 PM
It can be tempting to ask employees who are part of a marginalized community to help you create inclusive policies.
@anikacolliernavaroli.com calls this "compelled identity labor": hire people whose explicit job is to be an expert, instead of voluntelling your employees who have other jobs to do.
January 21, 2025 at 5:33 PM
Create a "no questions asked" escalation policy, so that frontline staff can escalate to a manager if they feel unsafe or unable to answer a question.
Make sure that escalation chain goes all the way up to VP or C-Suite level so everyone is supported.
January 21, 2025 at 5:30 PM
Create a "no questions asked" escalation policy, so that frontline staff can escalate to a manager if they feel unsafe or unable to answer a question.
Make sure that escalation chain goes all the way up to VP or C-Suite level so everyone is supported.
Make sure that escalation chain goes all the way up to VP or C-Suite level so everyone is supported.
Write policies about expected user/ customer behavior, make them public, and hold people to them.
"We will ban you if you disrespect or threaten our staff", for example.
Or "We will ban you if you report trans people simply for being trans."
"We will ban you if you disrespect or threaten our staff", for example.
Or "We will ban you if you report trans people simply for being trans."
January 21, 2025 at 5:29 PM
Get really clear with senior leaders of the company you work for about corporate values and how to uphold them.
Create tricky hypothetical scenarios (e.g. your biggest client sends a racist email; someone threatens to sue you for having a DEI program) and get answers BEFORE you need them.
January 21, 2025 at 5:29 PM
Huge thanks to the Integrity Institute and TSPA/ TrustCon for enabling these kinds of discussions among t&s folks.
If you know of other conversations/ resources in this area, or are an expert and want to be on the podcast or chat TrustCon proposals, let me know!
January 10, 2025 at 9:34 PM
3️⃣ @anikacolliernavaroli.com writes about the harms to moderators from marginalized communities who are asked to work on content that attacks them.
www.cjr.org/tow_center/b...
January 10, 2025 at 9:32 PM
2️⃣ @jenniolsonsf.bsky.social from GLAAD talks about advocating for the LGBTQ+ community with Meta; the challenges of balancing free speech w/ protecting marginalized communities; & suggestions for folks working at social media platforms to advocate for change.
integrityinstitute.org/podcast/its-...
January 10, 2025 at 9:30 PM
1️⃣ Nadah Feteih discusses how tech workers (in integrity and t&s teams) can speak up about ethical issues at their workplace; activism from within the industry; compelled identity labor; balancing speaking up and staying silent; and more.
integrityinstitute.org/podcast/work...
January 10, 2025 at 9:28 PM
THIS IS WHAT STOOD OUT TO ME. As someone who had to deal with user-report-only systems for years… they do not work.
January 10, 2025 at 3:55 PM
It's fascinating because right now content moderation and general vibes are a main differentiator between Threads and X. When Threads feels more like X, they'll be closer competitors than ever before.
Looking forward to more people here on Bluesky :)
January 9, 2025 at 5:11 PM
Actually 1 more thing:
This allows Meta to dodge responsibility. “The users don’t like it. They reported it. It’s not us.”
It won’t make moderation more fair or better. It’ll be less consistent.
But gives Meta an excuse that is more politically accepted right now.
January 9, 2025 at 1:43 PM
This, combined with the rollback of hate policies, is REALLY going to change the vibes of Meta-run platforms.
/🧵
January 9, 2025 at 1:21 PM
Honestly, I feel it’s often better to just not have the rule at all if you can’t proactively detect and remove violations.
Automated detection isn’t perfect by any means, but it’s a heck of a lot better than user reports alone.
January 9, 2025 at 1:21 PM
Other users will have their content removed after being reported, but feel it’s unfair because so many other people got away with it.
January 9, 2025 at 1:20 PM
Relying on user reports alone means that the platform will have very spotty enforcement of some rules.
Many users will get away with rule-violating behavior because it is never reported.
January 9, 2025 at 1:20 PM
Policies are only as good as ENFORCEMENT, and consistent enforcement at that.
I learned this the hard way when I was head of t&s at a platform with little to no automated detection.
January 9, 2025 at 1:20 PM
— if other people are being hateful and harassing others, then users will want to fight back/ pile on/ get involved.
… or they will want to leave.
January 9, 2025 at 1:20 PM