Jed Brown
@jedbrown.org
jedbrown.org
Prof developing fast algorithms, reliable software, and healthy communities for computational science. Opinions my own. https://hachyderm.io/@jedbrown

https://PhyPID.org | aspiring killjoy | against epistemicide | he/they
Reposted by Jed Brown
EAs love malaria nets because it's supposedly the intervention where we have the "most statistical evidence" of its effectiveness. One kind of silly fact about this, though, is that if you read Duflo's actual paper, the choice to do the experiments on bed nets as an intervention is pretty much arbitrary
can someone who's not a weirdo catch me up on why effective altruists are uniquely obsessed with malaria nets?
December 26, 2025 at 4:21 PM
Reposted by Jed Brown
It is EXHAUSTING being made responsible for coming up with new kinds of assignments for our students, and it's also tedious reading op-eds that suggest the core problem is a crisis in teaching. But, as Chris and I lay out here, this isn't a crisis in teaching; it's an attack on learning.
"We envision a resistance that is...a repudiation of the efficiencies that automated algorithmic education falsely promises: a resistance comprising the collective force of small acts of friction."

"How to Resist AI in Education" by me & @cnygren.bsky.social
www.publicbooks.org/four-frictio...
Four Frictions: or, How to Resist AI in Education - Public Books
We are calling for resistance to the AI industry’s ongoing capture of higher education.
www.publicbooks.org
December 24, 2025 at 7:39 PM
Reposted by Jed Brown
Just like Klarna, which fired many people because it thought AI could solve everything, Salesforce swallowed the hype and was met with the reality that it just doesn't work well.

Same lesson, different company. As the AI hype bubble pops, we will see more of these.
timesofindia.indiatimes.com/technology/t...
After laying off 4,000 employees and automating with AI agents, Salesforce executives admit: We were more confident about…. - The Times of India
Tech News News: Salesforce, one of the world's most valuable enterprise software companies, is pulling back from its heavy reliance on large language models after enc.
timesofindia.indiatimes.com
December 23, 2025 at 12:05 PM
Reposted by Jed Brown
I read a thread a few months ago (apols to the OP) pointing out that MOOCs succeeded insofar as institutions now often claim instructor IP; lectures are recorded; classes are modularised, outcome-focused & 'supported' by generic 'help' resources... MOOC thinking accelerated HE neoliberalism.
December 24, 2025 at 4:13 AM
Reposted by Jed Brown
Thanks for sharing this document, @rweingarten.bsky.social.

I read it, and I believe it's inexcusably inadequate and will not prepare teachers or schools to confront the very real dangers of the "A.i." products pushed by your partners, including OpenAI.

A few thoughts for your consideration...
Read about Commonsense Guardrails for Using Advanced Technology in Schools aiinstruction.org/sites/defaul...
aiinstruction.org
December 24, 2025 at 3:43 AM
If a software product interacts in a way that imitates a human, the law should consider it an agent of the company, just like a human employee, with the people who deploy it held accountable in the same way.
December 23, 2025 at 10:40 PM
Reposted by Jed Brown
I don’t really see the contradiction, given that those big raises are often predicated on doing exactly those things. The board wasn’t duped. They got what they wanted.
While procuring big raises for themselves, these presidents were dismantling entire departments, firing faculty en masse, and signing expensive contracts with the epistemicide company in defiance of shared governance. Abject betrayal of the university mission.
www.currentaffairs.org/news/ai-is-d...
December 23, 2025 at 3:19 PM
Reposted by Jed Brown
Things that conservative students have said to me at the uni of Oklahoma
Hitler was misunderstood/his economic genius downplayed.
We should be using data from Nazi medical experiments.
‘Reverse racism’ is worse than racism.
Enslaved people were ‘well taken care of’.
This has been building for so long
December 23, 2025 at 1:28 PM
Reposted by Jed Brown
You may have heard that Trump’s extortion of University of California was defeated in court. But do you know who won this historic case? Spoiler: not a single UC administrator participated. It was all faculty members of the UC Faculty Associations and @aaup.org! Cc: @veenadubal.bsky.social
Behind the Scenes: How UC Faculty Beat Back Trump's Attacks
YouTube video by UC Faculty
youtu.be
December 5, 2025 at 7:15 AM
Excellent thread, with parallels everywhere upsides are claimed. For example, we are told that climate change will be disastrous if "AI" doesn't find a magical third way in which no existing power structures are challenged. Those power structures are the problem and this is their preferred framing.
Treating AI as the source of improved humanity shifts attention away from the systems that create pressure in the first place. Emotional strain is not a problem that computation resolves, it is a symptom of environments that value speed over reflection and output over rest. 6/*
December 22, 2025 at 8:32 PM
Good thread. I've been using this slide for those who insist on a technical definition for AI/ML in science. Coming from the perspective of scientific/numerical computing, the colloquial meaning does a lot to exclude (good, "classical") methods. It's deeply frustrating and constraining.
December 22, 2025 at 7:59 PM
Sure, Flock is fundamentally unserious about security and makes false statements about who has access, and sure, they're able to read what's on your phone when you walk near their cameras, but they also have patents to automate racial profiling.
patentimages.storage.googleapis.com/77/9a/03/7b3...
December 22, 2025 at 5:31 PM
Reposted by Jed Brown
100% this.

Here is something I wrote along similar lines earlier this year:

buttondown.com/maiht3k/arch...
December 22, 2025 at 7:12 AM
Reposted by Jed Brown
we are strictly zero tolerance, absolutely forbidden to use genAI at work. We have a large caseload and a specialized area. Know what we do, other than use resources on the intranet? We are constantly on Teams asking the brain trust of our department.
December 22, 2025 at 4:14 AM
This. In the past several years, many have branded their work "AI" for the financial benefits, despite the intense toxicity of the people and tacit assumptions at the center. As communities become conversant in critique and refusal, fragility is on full display as researchers cry out "not all AI". 🧵
That's not possible anywhere because of the bubble. The AI topic is toxic because so many of the key players are toxic.
December 22, 2025 at 3:39 AM
Reposted by Jed Brown
Oh wow, this is a new low, arguably more insidious and damaging to the integrity of human knowledge than creating fake sources (which at least can be exposed). Reading & judging the quality of the cited support for your claims is the difference between trustworthy research and cherry-picked bullshit.
It is not "attribution and sourcing" to generate post-hoc citations that have not been read and did not inform the student's writing. Those should be regarded as fraudulent: artifacts testifying to human actions and thought that did not occur.
www.theverge.com/news/760508/...
December 21, 2025 at 11:31 AM
Reposted by Jed Brown
While we're rightly concerned about citing invented AI publications, wider issues underpinning it need consideration
- poor training on how to find, read, appraise, synthesise literature
- prioritising journals over other sources
- pressure to rush search/review process
- competition
- metrification
I’m sorry, but it is disgraceful to be an academic who uses this technology to conduct research. It should be prohibited in all of our scholarly institutions, including universities and journals.
December 20, 2025 at 3:57 PM
Reposted by Jed Brown
Once, a respected person in my field shared a link to one of my articles, alongside enthusiastic agreement with a "quote" from me that does not appear anywhere within the article, or elsewhere in my corpus.

Frequently, others yell at me for "saying" things that likewise do not appear in the text.
There are always empty words about "human in the loop", that you should always check the work of an LLM. But the entire value of a summary is as a replacement for reading. Nobody uses an LLM to generate a "summary" *and* independently verifies.
December 21, 2025 at 5:44 AM
Reposted by Jed Brown
Really enjoyed this fantastic piece by @sonjadrimmer.bsky.social and @cnygren.bsky.social

Here are a few favorite pull quotes of mine + one little quibble at the end:
"We envision a resistance that is...a repudiation of the efficiencies that automated algorithmic education falsely promises: a resistance comprising the collective force of small acts of friction."

"How to Resist AI in Education" by me & @cnygren.bsky.social
www.publicbooks.org/four-frictio...
Four Frictions: or, How to Resist AI in Education - Public Books
We are calling for resistance to the AI industry’s ongoing capture of higher education.
www.publicbooks.org
December 21, 2025 at 1:02 AM
Reposted by Jed Brown
I’m pretty sure a big problem is that people making these decisions don’t understand at all how LLMs work.

There are a LOT of people in expert professions who bluff.

The problem is that people good at bluffing impress others and are also motivated to bluff.

So they tend to get higher positions.
Professional societies keep beclowning themselves buying into a lie about what an LLM "summary" is. They are inherently counterfeit: not an epistemic product of the ideas in the source, but summary-shaped text linguistically based on *other* works (in the training corpus) that use related language.
This is one of the reasons I remain horrified by seeing @historians.org suggest "ways to use gAI" that included this:
December 21, 2025 at 3:46 AM
Reposted by Jed Brown
"I only use it for summaries" is the new six word tragedy
Professional societies keep beclowning themselves buying into a lie about what an LLM "summary" is. They are inherently counterfeit: not an epistemic product of the ideas in the source, but summary-shaped text linguistically based on *other* works (in the training corpus) that use related language.
This is one of the reasons I remain horrified by seeing @historians.org suggest "ways to use gAI" that included this:
December 21, 2025 at 3:30 AM
Professional societies keep beclowning themselves buying into a lie about what an LLM "summary" is. They are inherently counterfeit: not an epistemic product of the ideas in the source, but summary-shaped text linguistically based on *other* works (in the training corpus) that use related language.
This is one of the reasons I remain horrified by seeing @historians.org suggest "ways to use gAI" that included this:
December 21, 2025 at 3:23 AM
And the best part is we don't need any new rules to do that. We just need to treat it as what it so transparently is: nonconsensual ghost authorship in a blender. Institutions and publishers already agree that ghost authorship is misconduct and have policies to address it.
I’m sorry, but it is disgraceful to be an academic who uses this technology to conduct research. It should be prohibited in all of our scholarly institutions, including universities and journals.
December 20, 2025 at 3:07 PM
To record or provide such data would violate the ALA Library Bill of Rights. It's remarkable in today's surveilled society to read the 🔥 2002 Tattered Cover decision in which the CO Supreme Court denied a search warrant for bookstore records of a specific individual.

www.aclu-co.org/news/aclu-ce...
December 20, 2025 at 4:46 AM
Universities are dying of FOMO, eager to normalize using the Academic Misconduct Machine. This epistemicide is not inevitable. We can, and must, engage in the radical act of refusing to use the Academic Misconduct Machine.
Closing out my year with a journal editor shocker 🧵

Checking new manuscripts today, I reviewed a paper attributing 2 papers to me that I did not write. A daft thing for an author to do, of course. But, intrigued, I web searched one of the titles, and that's when it got real weird...
December 19, 2025 at 6:48 PM