Timothy Graham
@timothyjgraham.bsky.social

Associate Professor at Queensland University of Technology. Computational communication, social theory. Interests: Propaganda, dark political communication, the relationship between technology, truth, and knowledge

This is Kristina Solovyova. Russian terrorists killed her in Zhytomyr, striking residential buildings with drones on December 23. She was only 4 🕯
Tonight, I spent a wonderful Christmas Eve with my family. But I couldn’t stop thinking about the Ukrainian soldiers who are paying the ultimate price so that we can have a peaceful Christmas Eve.

#StandWithUkraine
Yoel Roth, former head of trust and safety at Twitter, demonstrating the intellectual dishonesty of “In Covid’s Wake” by showing how its authors distorted his own words to make them say the opposite of what he was arguing.
A small (personal) example of this book’s intellectual dishonesty:

My father-in-law is reading In Covid’s Wake, and excitedly told me he found a passage where I’m quoted. The quote in question is me saying the FBI worked to censor speech on social media.

Huh? When did I say that?!

Merry Christmas to everyone!

This is just weird.

Thanks Reviewer A! I mean J

A friendly community reminder not to repost or share unverified information, no matter how tempting it might be. Join the revolution today!

Reposted by Timothy Graham

The FBI sent it to handwriting analysis. The outcome of that is unclear - as in, I haven't found it in the files yet, if it is indeed there.
Bringing back Limewire to illegally rip copies of reporting suppressed by the government is definitely some cyberpunk shit

Quite an experience to live in fear, isn't it?

Did we learn nothing from the plot of Blade Runner?

This is not a formal theory - it is a waiting-at-the-vet theory

Trump: 0 minutes

Whereupon it is not only shit, but we're forced to use it

┌────────────────┐
The Four Stages of Dante's Platform
└────────────────┘

(1) Datafication
(2) Monetisation pivot
(3) Enshittification
(4) Infrastructural capture

JK Rowling theory confirmed

This is somehow even worse than enshittification. What's the level under that?

Reposted by Timothy Graham

Belief in science-related conspiracy theories is not just a matter of knowledge: The democratic quality of countries as a protective factor
@irelopeznavarro.bsky.social & Santos-Requej

journals.plos.org/plosone/arti...

Often the real question is not about the technology, but about what it means to be human.

Reposted by Timothy Graham

Pulled out of class, held back after school and forced to prove they're not AI cheats, students say NSW high schools are pitting them against faulty AI detectors.
High schools demand students prove they are not AI cheats
www.abc.net.au

Reposted by Timothy Graham

The death toll after a russian missile strike on Odesa has risen to 8, with 27 others injured.

Some of the victims were on a bus that ended up at the epicenter of the attack.

Will read with interest - thanks, Edouard.

I'm not disagreeing with you here. I was curious about the debate: what was at stake and what the details of the issue were. The insights into why they are limited are quite interesting, and I wanted to understand it better.

Thanks. I feel that this is a very prescient example of the limits - the inability, perhaps - of LLMs to reason. They don't understand logic; they just see a pattern, can't see or think about the limits of determinacy here, and just parrot what's likely to come next, rather than what should follow.

I'm just a computer science hack when it comes to mathematics, but is the issue here that LLMs are vacuuming up first-year calculus mistakes in their training data and reproducing them through their "reasoning" (which is just next-token pattern matching)?
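
(A minimal toy sketch of that point, nothing like a real LLM: just a bigram counter over a made-up corpus. The corpus strings and the predict_next helper are hypothetical, purely to illustrate how frequency-based next-token completion reproduces whatever mistakes dominate its training data.)

from collections import Counter, defaultdict

# Made-up training lines; the common mistake "0 * inf = 0" appears most often.
corpus = [
    "lim x->0+ of x * (1/x) = 1",
    "0 * inf = 0",
    "0 * inf = 0",
    "0 * inf = indeterminate",
]

# Count, for each token, which token follows it (a simple bigram table).
follows = defaultdict(Counter)
for line in corpus:
    tokens = line.split()
    for cur, nxt in zip(tokens, tokens[1:]):
        follows[cur][nxt] += 1

def predict_next(token):
    """Return the most frequent continuation seen in the corpus."""
    options = follows[token]
    return options.most_common(1)[0][0] if options else None

# The "model" continues "=" with "0" because that is the most common
# pattern in its training data, mistakes included - no evaluation happens.
print(predict_next("="))  # -> 0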

Ah, the old infinity-times-zero issue, right?
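
(For reference, a short worked version of that "infinity times zero" point: the form 0 · ∞ is indeterminate because different limits of the same shape give different answers.)

\[
\lim_{x \to 0^{+}} x \cdot \frac{1}{x} = 1,
\qquad
\lim_{x \to 0^{+}} x \cdot \frac{1}{x^{2}} = +\infty,
\qquad
\lim_{x \to 0^{+}} x^{2} \cdot \frac{1}{x} = 0,
\]

so the symbol 0 · ∞ has no single value on its own; each case has to be resolved by working out the actual limit.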