Sebastian Ahnert
@sebastianahnert.bsky.social
Physicist with interdisciplinary interests. https://www.tcm.phy.cam.ac.uk/~sea31/
We’re hiring a two-year DH postdoc for our HAVI project, to work on a new type of knowledge graph architecture for the humanities (and beyond). Deadline soon (18 Jan)! www.cam.ac.uk/jobs/researc...
Research Assistant/Research Associate (Fixed Term)
Applications are invited for a full-time postdoctoral researcher to work on an international collaboration to develop AI-based solutions for research on archival materials as part of the Humanities
www.cam.ac.uk
January 13, 2026 at 10:14 AM
This is such an important point.
The words, the pixels, the sound waves aren't the art. The art is in the experience of the artist and the audience, together.

January 3, 2026 at 10:43 AM
Reposted by Sebastian Ahnert
Brexit has deepened the British economy’s flaws and dulled its strengths. The question is what to do about it econ.st/4qwSix0

Photo: Magnum
January 3, 2026 at 7:00 AM
Reposted by Sebastian Ahnert
Uh… check out what ChatGPT allegedly told a man before he killed his mother and then himself.

storage.courtlistener.com/recap/gov.us...
December 31, 2025 at 6:20 PM
Reposted by Sebastian Ahnert
Scott’s response is perfect. You saved me time writing a thread!
August 29, 2025 at 10:17 AM
Reposted by Sebastian Ahnert
Immensely significant - and worrying.
June 14, 2025 at 7:38 AM
Reposted by Sebastian Ahnert
Far from the whole story but we wrote about the grant proposal contest component a few years ago. Thinking about other elements of your question this summer….

journals.plos.org/plosbiology/...
Contest models highlight inherent inefficiencies of scientific funding competitions
Scientists waste substantial time writing grant proposals, potentially squandering much of the scientific value of funding programs. This Meta-Research Article shows that, unfortunately, grant-proposa...
journals.plos.org
June 14, 2025 at 8:18 AM
Reposted by Sebastian Ahnert
When billion-dollar AIs break down over puzzles a child can do, it's time to rethink the hype | Gary Marcus
The tech world is reeling from a paper that shows the powers of a new generation of AI have been wildly oversold, says cognitive scientist Gary Marcus
www.theguardian.com
June 10, 2025 at 11:34 AM
Reposted by Sebastian Ahnert
Worth noting today that the entire budget of the NEH is about $200M.
According to acting DOD Comptroller Bryn McDonnell it'll cost $134M for the deployment of the Guard to Los Angeles.
June 10, 2025 at 3:48 PM
Reposted by Sebastian Ahnert
Our op-ed in the Guardian addresses the danger of Trump's "Gold Standard Science" executive order.

with
@cdelawalla.bsky.social
@vambros.bsky.social
Carol Greider
@michaelemann.bsky.social
@briannosek.bsky.social
Trump’s new ‘gold standard’ rule will destroy American science as we know it | Colette Delawalla
The new executive order allows political appointees to undermine research they oppose, paving the way to state-controlled science
www.theguardian.com
May 29, 2025 at 4:32 PM
Reposted by Sebastian Ahnert
Fascism knows no bounds.

www.nytimes.com/2025/05/22/u...
Trump Administration Halts Harvard’s Ability to Enroll International Students
www.nytimes.com
May 22, 2025 at 5:52 PM
Reposted by Sebastian Ahnert
LLMs are nothing more than models of the distribution of the word forms in their training data, with weights modified by post-training to produce somewhat different distributions. Unless your use case requires a model of a distribution of word forms in text, indeed, they suck and aren't useful.
There are a lot of critiques of LLMs that I agree with but "they suck and aren't useful" doesn't really hold water.

I understand people not using them because of social, economic, and environmental concerns. And I also understand people using them because they can be very useful.

Thoughts?
April 24, 2025 at 4:54 PM
Reposted by Sebastian Ahnert
This is the right thing to do, and it’s a profound shame that he had to do it. www.nytimes.com/2025/04/22/b...
April 22, 2025 at 7:33 PM
What a magnificent piece of writing.
This is an absolute *must read* opinion from Judge Wilkinson on the Fourth Circuit - a very conservative judge - in the Abrego Garcia case.

storage.courtlistener.com/recap/gov.us...
April 17, 2025 at 11:17 PM
Reposted by Sebastian Ahnert
I've been reflecting some more overnight on the For Some Women Scotland case. 🧵
April 17, 2025 at 8:39 AM
Reposted by Sebastian Ahnert
Wooah.
Thirty-eight of 43 experts cut last month from the boards that review the science and research that happens in laboratories at the National Institutes of Health are female, Black or Hispanic, according to an analysis by the chairs of a dozen of the boards.
Women, minorities fired in purge of NIH science review boards
Scientists, with expertise in fields that include mental health, cancer and infectious disease, typically serve five-year terms and were not given a reason for their dismissal.
www.washingtonpost.com
April 16, 2025 at 9:20 PM
Reposted by Sebastian Ahnert
"the president defied a Supreme Court ruling to return a man mistakenly sent to a gulag... and spoke of sending Americans to foreign concentration camps.
This is the beginning of an American policy of state terror, and it has to be identified as such to be stopped"
snyder.substack.com/p/state-terror
State Terror
A brief guide for Americans
snyder.substack.com
April 15, 2025 at 3:02 PM
Reposted by Sebastian Ahnert
In this Op-ed for Scientific American, Asmelash Teka Hadgu and I discuss one of the many reasons the idea of replacing US federal workers with so-called generative AI systems should terrify us. 🧵

www.scientificamerican.com/article/repl...
Replacing Federal Workers with Chatbots Would Be a Dystopian Nightmare
The Trump administration sees an AI-driven federal workforce as more efficient. Instead, with chatbots unable to carry out critical tasks, it would be a diabolical mess
www.scientificamerican.com
April 15, 2025 at 12:32 AM
Reposted by Sebastian Ahnert
1/4. On the White House’s theory, if they abduct you, get you on a helicopter, get to international waters, shoot you in the head, and drop your corpse into the ocean, that is legal, because it is the conduct of foreign affairs.
April 15, 2025 at 1:13 AM
Reposted by Sebastian Ahnert
1. LLM-generated code tries to run code from online software packages. Which is normal but
2. The packages don’t exist. Which would normally cause an error but
3. Nefarious people have made malware under the package names that LLMs make up most often. So
4. Now the LLM code points to malware.
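The attack chain in the steps above suggests a simple defensive habit: screen every dependency an LLM suggests against a vetted allowlist before anything reaches the installer. A minimal sketch in Python, assuming a hand-maintained allowlist (the package names below, including the suspect ones, are illustrative, not real audit data):

```python
# Sketch: screen LLM-suggested dependencies against a vetted allowlist
# before installing anything. All names here are illustrative assumptions.

VETTED_PACKAGES = {"requests", "numpy", "pandas", "flask"}


def screen_dependencies(suggested):
    """Split LLM-suggested package names into vetted and suspect lists.

    Anything not on the allowlist is held for manual review instead of
    being passed to the installer: a hallucinated name may already be
    registered by an attacker ("slopsquatting").
    """
    vetted = [name for name in suggested if name in VETTED_PACKAGES]
    suspect = [name for name in suggested if name not in VETTED_PACKAGES]
    return vetted, suspect


# "reqeusts" (typo) and "numpy-utils-pro" are plausible hallucinated names.
vetted, suspect = screen_dependencies(["requests", "reqeusts", "numpy-utils-pro"])
```

An allowlist is deliberately conservative: it flags legitimate-but-unvetted packages too, which is the point when the installer cannot tell a real dependency from a squatted one.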
LLMs hallucinating nonexistent software packages with plausible names leads to a new malware vulnerability: "slopsquatting."
LLMs can't stop making up software dependencies and sabotaging everything: Hallucinated package names fuel 'slopsquatting'
www.theregister.com
April 12, 2025 at 11:43 PM