Independent Reviewer of Terrorism and State Threat Legislation
@terrorwatchdog.bsky.social
Jonathan Hall KC
To which the British retort might be: disinformation can have an impact on the battlefield. At the moment this feels like an unbridgeable divide…/ends
February 19, 2025 at 4:09 PM
Even then, will tech companies operating under the First Amendment decide to remove content because of British squeamishness? Vice President Vance might say, and he has a point where trust in institutions is low, why would you let a tech company or regulator decide what is disinfo?…/6
February 19, 2025 at 4:09 PM
And assume it has the capacity to remove adapted disinformation as bad actors respond to moderation efforts…/5
February 19, 2025 at 4:09 PM
Where tech companies must remove content amounting to foreign interference as a *priority offence*. Even if we assume the tech company has the capacity to identify a foreign link, not just commercial click-bait (a major assumption)…/4
February 19, 2025 at 4:09 PM
…under section 13 of the National Security Act 2023. But in the real world they will be untraceable and abroad, which points to the need for prevention. Enter the Online Safety Act 2023…/3
February 19, 2025 at 4:09 PM
For the sake of argument, assume Russia had a plan. If Russian operatives used X/Twitter to try to influence UK political leaders in their decision-making, that would be foreign interference triable in the UK…/2
February 19, 2025 at 4:09 PM
Thank you
January 27, 2025 at 12:59 PM
Have already benefited from content from @danieldesimone.bsky.social @lizziedearden.bsky.social @kenanmalik.bsky.social and many others on this topic…/ends
January 27, 2025 at 9:59 AM
Danyal Hussein (Satanism), Jake Davison (incel beliefs), Emad Al Swealmeen (Liverpool Women's Hospital), the Northallerton teenagers (Columbine plot), Gotterdammerung teenager (mass shooting plot), Thomas Huang (school hammer attack), Damon Smith (unexploded tube bomb)…/2
January 27, 2025 at 9:59 AM
But also how platforms will deal with organic, i.e. normal human-distributed, viral content that happens to be false and is used to drive violence or is calculated to have an interference effect…/ends
January 14, 2025 at 8:56 AM
The question I have on the demise of fact-checking is whether these capabilities for spotting coordination will be canned…/9
January 14, 2025 at 8:56 AM
However, State Threat actors can also amplify true information - e.g. true details of a terror attack - to suggest Broken Britain…/8
January 14, 2025 at 8:56 AM
But in practice Meta has major capabilities for spotting ‘coordinated inauthentic behaviour’ on its platforms - think of a Russian-controlled bot farm putting out and amplifying disinformation…/7
January 14, 2025 at 8:56 AM
The net effect of removing fact-checking but not moderation could be to make online foreign interference relatively easier than before…/6
January 14, 2025 at 8:56 AM
Of course this is crude, because some content that encourages terrorist violence could have strong truth value, e.g. reporting from a warzone…/5
January 14, 2025 at 8:56 AM
But moderation, meaning removal, is about content-status rather than truth value: is it badged propaganda from a proscribed terror group, or does it encourage violence?…/4
January 14, 2025 at 8:56 AM
Since fact-checking is truth evaluation, its removal means in principle more disinformation (though Zuckerberg is right about the risk of human fact-checking bias) and therefore a greater risk of state exploitation…/3
January 14, 2025 at 8:56 AM
Both terrorism content and foreign interference content are now priority illegal content under the Online Safety Act in the UK…/2
January 14, 2025 at 8:56 AM