Ellen Judson
@ellenejudson.bsky.social
Disinformation investigator, tech policy nerd, philosophy, human rights, climate. Views my own. She/her

Also on https://www.linkedin.com/in/ellen-judson-75918241/
In the meantime, I hope the UK resists the siren call of these so-called 'free speech' measures from across the pond and doesn't try to follow suit, but instead works to ensure that the Online Safety regime genuinely protects users' rights #OnlineSafetyAct
January 7, 2025 at 8:53 PM
We'll see what the specific policy changes are in the next few weeks - but the outlook doesn't look good
January 7, 2025 at 8:53 PM
and b) that leaves a significant chunk of serious harms unaddressed. Especially since 'for less severe policy violations, we’re going to rely on someone reporting an issue before we take any action.' See this investigation I worked on - www.globalwitness.org/en/campaigns...
Meta slow to review hate speech on Senate candidate pages | Global Witness
As the US election approaches, a Global Witness investigation finds that Meta's moderation systems are struggling to keep pace with hate speech
www.globalwitness.org
January 7, 2025 at 8:53 PM
Of course, the rebuttal is that for 'really bad' things (illegal content), they will still crack down on it. A) accurate identification of all and only illegal content is very, very difficult bsky.app/profile/elle...
There is also a nod in that paragraph to the issue we raised around the illegal content guidance incentivising overmoderation, despite allegedly being designed to do the opposite - but Ofcom (quite reasonably) points out that this risk arises from the Act itself www.tandfonline.com/doi/full/10....
The Bypass Strategy: platforms, the Online Safety Act and future of online speech
In this paper, we argue that the Online Safety Act 2023 and Ofcom’s guidance incentivise online platforms to adopt a ‘Bypass Strategy’, where they create and enforce content moderation rules that a...
www.tandfonline.com
January 7, 2025 at 8:53 PM
Elsewhere in the announcement, the initial move to more content moderation is blamed on societal and political pressure. The implication is that they should have resisted that societal pressure in order to uphold fundamental principles. But that apparently doesn't apply to resisting political pressure to reduce moderation...
January 7, 2025 at 8:53 PM
That's the whole point of human rights: they constrain (at least in theory) political decision-making
January 7, 2025 at 8:53 PM
If attacks on and incitement of violence against immigrants and LGBT+ people are becoming more common, platforms should be working even harder to protect those groups - not saying 'well, that's just what people think nowadays'
January 7, 2025 at 8:53 PM
This change smacks of a seriously insidious kind of majoritarianism: the idea that because lots of people are now saying a thing, that thing ought to be allowed to be widely said.
January 7, 2025 at 8:53 PM
My heart goes out to those in the US, especially immigrants and trans people, who look likely to be hit hardest by these policy changes.
January 7, 2025 at 8:53 PM
Similarly, to move from 'our rules are prone to over-enforcement' (even if true) to 'so we should scrap rules' is a stretch.
January 7, 2025 at 8:53 PM
Fact-checking is by no means perfect. But it does let you give users more context and information easily, and demoting content that fails a fact-check limits reach and virality WITHOUT content takedown, helping to preserve freedom of expression
January 7, 2025 at 8:53 PM
The leaps made in this announcement are glaring. If speech constitutes legitimate debate, then any consequence - including 'intrusive labels' - is apparently the same thing as censorship
January 7, 2025 at 8:53 PM
But that, of course, doesn't help with the perception of bias, which is the real worry with the new administration (key word in this post - 'concern') www.threads.net/@zuck/post/D...
Mark Zuckerberg (@zuck) on Threads
5/ Move our trust and safety and content moderation teams out of California, and our US content review to Texas. This will help remove the concern that biased employees are overly censoring content.
www.threads.net
January 7, 2025 at 8:53 PM
And if your fact-checkers really, truly are too biased - then you should invest in more of them, train them better, support them more, not just abandon the whole concept of independent fact-checking! (I'm seeing a theme here...)
January 7, 2025 at 8:53 PM
And the claims about bias are just a smokescreen. Everyone has bias - not just experts, but also social media users and social media platforms. Independent fact-checkers are part of an information ecosystem that helps to reduce bias by verifying information.
January 7, 2025 at 8:53 PM
So much of the Meta announcement amounts to saying 'we make loads and loads of mistakes'. It's a bizarre justification for not meeting your responsibilities as a social media platform: 'we make too many mistakes trying to protect people, so we're just not going to try'?
January 7, 2025 at 8:53 PM
- the need for which is demonstrated in this story from last month about the horrific experiences of Facebook moderators www.theguardian.com/media/2024/d...
More than 140 Kenya Facebook moderators diagnosed with severe PTSD
Exclusive: Diagnoses part of lawsuit being brought against parent company Meta and outsourcer Samasource Kenya
www.theguardian.com
January 7, 2025 at 8:53 PM
And yes, in some cases over-moderation is a genuine problem with content moderation. But the way to address it is to improve your content moderation systems - invest more, train more, support your moderators, build on local expertise - rather than to stop moderating altogether. That takes money and commitment -
January 7, 2025 at 8:53 PM
A reminder that this is the real world - and the real harms - we are talking about: www.bbc.co.uk/news/world-a...
Facebook admits it was used to 'incite offline violence' in Myanmar
Facebook says it is tackling problems highlighted in an independent report on its role in ethnic violence.
www.bbc.co.uk
January 7, 2025 at 8:53 PM
Meta describes some of the consequences of their vision of free expression as necessarily 'messy', 'good, bad and ugly'. This is a classic move to discredit people who raise the alarm about online harms as just not being able to deal with the nuance of the real world.
January 7, 2025 at 8:53 PM