Neil Traft
@ntraft.bsky.social
PhD student at the Vermont Complex Systems Institute. Interested in ML, evolution, self-organization, & collective intelligence.

http://ntraft.com
Reposted by Neil Traft
What if instead of buying software from a store, you could grow it in your garden?
November 5, 2025 at 9:43 PM
What an incredible tool! I think I'll be returning to this a lot over the next month!

The most interesting papers might be the ones that seem misclassified or out of place...
Inside NeurIPS 2025: The Year’s AI Research, Mapped

New blog post!

NeurIPS 2025 papers are out, and it's a lot to take in. This visualization lets you explore the entire research landscape interactively, with clusters and @cohere.com LLM-generated explanations that make it easier to grasp (one possible pipeline is sketched below).
November 4, 2025 at 2:06 AM
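For intuition, a map like this can be built by embedding every abstract, clustering the embeddings, and having an LLM label each cluster. A minimal Python sketch, where the TF-IDF features, the cluster count, and the llm_label call are all my assumptions, not the post's actual pipeline:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "Scaling laws for sparse mixture-of-experts language models.",
    "Routing tokens in mixture-of-experts transformers.",
    "Diffusion models for protein structure generation.",
    "Benchmarking continual reinforcement learning agents.",
]

# Embed abstracts (a real pipeline would likely use a neural embedding model).
vecs = TfidfVectorizer().fit_transform(abstracts)
# Group papers into clusters of related work.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vecs)

for c in sorted(set(labels)):
    members = [a for a, l in zip(abstracts, labels) if l == c]
    # explanation = llm_label(f"Give a short theme for: {members}")  # hypothetical LLM call
    print(c, members)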
It's too easy to write garbage survey papers and get lots of citations. Over time, I've learned to ignore most survey papers and recognize some markers of quality.
arXiv has not had an explicit policy on survey papers. In CS, we have recently adopted a policy of requiring successful peer review prior to release, because we are inundated with LLM-slop surveys intended to boost citation counts.
November 2, 2025 at 12:03 AM
Reposted by Neil Traft
The blog post is available: blog.arxiv.org/2025/10/31/a...
November 1, 2025 at 5:06 PM
Reposted by Neil Traft
It's been 8 months since I notified @springernature.com of a clear-cut case of fake references in a book chapter which they published. They have still not taken any action as they claim to still be investigating.

I will not review for Springer again until this matter is satisfactorily resolved.
October 21, 2025 at 11:27 AM
Reposted by Neil Traft
UBC’s department of Computer Science invites applications for up to 2 full-time tenure-track positions. The department is particularly interested in researchers in: visualization, robotics, reinforcement learning, data management, and data mining. @cs.ubc.ca

science.ubc.ca/about/careers
October 21, 2025 at 5:07 PM
Translation: You have no new messages.
October 17, 2025 at 12:51 AM
Reposted by Neil Traft
This year's ALICE guest speakers 🧑‍🔬

- Angel Goñi-Moreno - @angelgm.bsky.social
- Alyssa Adams - @alyssa-m-adams.bsky.social
- Alexander Mordvintsev
- Eric Medvet - @ericmedvetts.bsky.social
- Kyrre Glette - @kyrre2000.bsky.social
- Stefano Nichele - @stenichele.bsky.social
- Susan Stepney
October 13, 2025 at 10:06 AM
"Consider another human difference: financial solvency, which can be measured and quantified, just like IQ. It is heritable in twin studies and (less so) in SNPs. But would you bet that sooner or later we are going to know 'what’s going on' with your bank account at the level of genes?" Same for IQ.
In 2018, Charles Murray challenged me to a bet: "We will understand IQ genetically—I think most of the picture will have been filled in by 2025—there will still be blanks—but we’ll know basically what’s going on." It's now 2025, and I claim a win. I write about it in The Atlantic.
Your Genes Are Simply Not Enough to Explain How Smart You Are
Seven years ago, I took a bet with Charles Murray about whether we’d basically understand the genetics of intelligence by now.
www.theatlantic.com
October 13, 2025 at 5:22 PM
GPT now has ads. 💀

They are voluntary... for now.
September 28, 2025 at 6:31 AM
Reposted by Neil Traft
Q. Who aligns the aligners?
A. alignmentalignment.ai

Today I’m humbled to announce an epoch-defining event: the launch of the 𝗖𝗲𝗻𝘁𝗲𝗿 𝗳𝗼𝗿 𝘁𝗵𝗲 𝗔𝗹𝗶𝗴𝗻𝗺𝗲𝗻𝘁 𝗼𝗳 𝗔𝗜 𝗔𝗹𝗶𝗴𝗻𝗺𝗲𝗻𝘁 𝗖𝗲𝗻𝘁𝗲𝗿𝘀.
Center for the Alignment of AI Alignment Centers
We align the aligners
alignmentalignment.ai
September 11, 2025 at 1:17 PM
At #AutoML25, Dr. Manuela Veloso calls for AI research to move in three major directions:

1) AI must know what it doesn't know (uncertainty).
2) AI must be able to continually improve (continual learning).
3) AI systems must include humans at all times to be more robust (automation is always only partial).
September 8, 2025 at 2:32 PM
Reposted by Neil Traft
Don't look down
August 17, 2025 at 9:42 PM
Merely a day after I call out OpenAI for being so closed (bsky.app/profile/ntra...), they release an open weights GPT.

Glad they're finally reading my skeets! 😏
gpt-oss, OpenAI's open weights model

120B & 20B variants, both MoE with 4 experts active

openai.com/index/introd...
August 6, 2025 at 1:03 PM
With its focus on short-term profitability, the US has forgotten how to make markets competitive. And companies have forgotten that they are only *vehicles* for the final product, not ends in themselves.
Andrew Ng's piece on 🇺🇸 vs 🇨🇳 competition in AI is worth reading:

Full article: www.deeplearning.ai/the-batch/is...
August 3, 2025 at 4:23 PM
Reposted by Neil Traft
I’m very excited to announce that I’ve just signed a contract with @princetonupress.bsky.social for a new book, tentatively titled “The Genomic Code” 📖 😊
ALT: Kermit the Frog is using a typewriter in a messy room.
media.tenor.com
August 1, 2025 at 5:10 PM
In reality, all environments are only partially observable, so this is the only regime in which RL should be evaluated.
August 3, 2025 at 12:42 AM
Seems like a useful framework. Looking forward to reading this new position paper.
I split AI into 3 non-mutually exclusive types (see Table 1 above): displacement (harmful), enhancement (beneficial), and/or replacement (neutral) of human cognitive labour. More later possibly, but see Tables 2 to 4 (attached or here: arxiv.org/pdf/2507.19960) for the worked examples. 2/n
July 30, 2025 at 12:31 AM
I just realized that these hyped-up "AI scientist" concepts all concentrate on the idea of *discovery*: new algorithms, new materials, new products, without exception. Not one of them focuses on *understanding*, arguably the larger role of a scientist! 🧐
July 29, 2025 at 5:58 PM
Reposted by Neil Traft
Proud of our team's talks and posters at #IC2S2 in Norrköping, Sweden - what a great week. We're excited to host next year in Vermont!
July 25, 2025 at 11:13 AM
Reposted by Neil Traft
We are going to spend the next few years finding out exactly *why* it was a horrible idea to unleash a tsunami of vibe-coded apps made by idiots and scammers on an unprepared world.

Welcome to the Entirely Foreseeable AI Consequences Era.
Women Dating Safety App 'Tea' Breached, Users' IDs Posted to 4chan
“DRIVERS LICENSES AND FACE PICS! GET THE FUCK IN HERE BEFORE THEY SHUT IT DOWN!” the thread read before being deleted.
www.404media.co
July 26, 2025 at 1:46 AM
Apparently Papers With Code was abruptly sunsetted for reasons that are unclear. 😢
😞 So sad to see paperswithcode is discontinued – but grateful that the @hf.co team is, as always, stepping up to support the community!

It was an incredible resource for use cases, common themes in papers, and visualizing how models have improved on evals over time:

huggingface.co/papers/trend...
Trending Papers - Hugging Face
Your daily dose of AI research from AK
huggingface.co
July 26, 2025 at 11:33 AM
Been studying Mixture of Experts recently. It's baffling to learn that different tokens are typically routed to *different* experts. How can you be an "expert" at a particular word?? I suspect this name lends us very poor intuition for what these models actually do!

(shows 2 tokens and 4 "experts"; see the routing sketch below)
July 25, 2025 at 12:56 PM
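For what it's worth, here's a minimal sketch of top-k MoE routing (all weights random, all names hypothetical): a gating network scores every expert per token, and each token's output is a weighted mix of only its top-k experts. An "expert" is just the feed-forward block the router happens to prefer for that token, not a specialist in any human sense.

import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

W_gate = rng.normal(size=(d_model, n_experts))  # learned in a real model
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(tokens):
    # Score every expert for every token, then softmax into routing probabilities.
    logits = tokens @ W_gate
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    out = np.zeros_like(tokens)
    for i, tok in enumerate(tokens):
        top = np.argsort(probs[i])[-top_k:]           # this token's top-k experts
        w = probs[i][top] / probs[i][top].sum()       # renormalize over the chosen k
        for weight, e in zip(w, top):
            out[i] += weight * (tok @ experts[e])     # mix the chosen experts' outputs
    return out

tokens = rng.normal(size=(2, d_model))  # two token embeddings, as in the attached image
print(moe_layer(tokens))                # each token consulted only 2 of the 4 "experts"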
Reposted by Neil Traft
There are people, in tech (and now in the government!), who will mislead you about what current AI models are capable of. If we don't call them out, they'll drag us all down.
Reporter: The FDA has a new AI tool that's intended to speed up drug approvals. But several FDA employees say the new AI helper is making up studies that do not exist. One FDA employee telling us, 'Anything that you don't have time to double check is unreliable. It hallucinates confidently'
July 23, 2025 at 8:01 PM