Eugene Vinitsky 🍒
@eugenevinitsky.bsky.social
Anti-cynic. Towards a weirder future. Reinforcement Learning, Autonomous Vehicles, transportation systems, the works. Asst. Prof at NYU
https://emerge-lab.github.io
https://www.admonymous.co/eugenevinitsky
Reposted by Eugene Vinitsky 🍒
Alright in the interest of creating the kind of discussion/posting I want to see on this site, I'm gonna share something that I learned in the past few weeks. Not new, not mind blowing, but I was reading the VAE and VQ-VAE papers this winter break for some projects so here we go 🧵
January 6, 2026 at 3:18 AM
This website has been an experiment run by Harvard to find the one pure person who is allowed to post. The experiment is concluded, thank you for participating.

The person is @gracekind.net
January 6, 2026 at 2:43 AM
While we also hype the moderation features, I'm really excited about the paper discovery tools @mariaa.bsky.social and I are starting to build. Open social means we can bootstrap onto existing discussions that are happening
January 6, 2026 at 12:58 AM
one of the funniest things about this feature we're building is that everyone will promptly use it to block me
That said, we know people want more safety tools, so @eugenevinitsky.bsky.social and I are building them!

Here's what that looks like in our client. My post got a lot of interactions and then a big account reposted it, so alerts showed up and I have tools to deal with it.

Give us more ideas!
January 6, 2026 at 12:55 AM
There are a lot of features we can and will build. But photos in DMs and group chats are way outside our scope, so…would really love it if Bluesky could prioritize that
January 5, 2026 at 11:33 PM
Reposted by Eugene Vinitsky 🍒
That said, we know people want more safety tools, so @eugenevinitsky.bsky.social and I are building them!

Here's what that looks like in our client. My post got a lot of interactions and then a big account reposted it, so alerts showed up and I have tools to deal with it.

Give us more ideas!
January 5, 2026 at 9:59 PM
will do the same!
I'm going to start doing this every month to help new users dipping their toes in. If you post about AI research here, like this post. I'll follow you.

(Like even if I already follow you, to help others find you too.)
January 5, 2026 at 9:32 PM
testing firehose capture of this excellent post
What can cognitive science learn from AI? In infinitefaculty.substack.com/p/what-cogni... I outline how AI has found that scale and richness of learning experiences fundamentally change learning & generalization — and how I believe we should rethink cognitive experiments & theories in response.
What cognitive science can learn from AI
#3 in a series on cognitive science and AI
infinitefaculty.substack.com
January 5, 2026 at 9:28 PM
Reposted by Eugene Vinitsky 🍒
Bluesky is great. I've met multiple collaborators and friends through this site, I'm overwhelmed by interesting papers to read, when I post technical questions I get many informative responses. I've blocked an occasional weird person but then they're gone and I don't care.
January 5, 2026 at 9:04 PM
People are misunderstanding what happened here. They mostly did not go back to twitter. They went to linkedin or substack or threads, or nowhere at all, and so interesting technical conversations that could have happened simply didn't.
Lots of people tried to reconstruct the ML/AI/stats community over here. 90% of them left because the random abuse they got for just posting ordinary research results was too much.
bsky.app/profile/jwhe...
Wrote about an obvious and yet profoundly underappreciated aspect of the AI boom: its total narrative capture by Elon Musk's X nymag.com/intelligence...
January 5, 2026 at 8:55 PM
Reposted by Eugene Vinitsky 🍒
Do reasoning models have real “Aha!” moments—mid-chain realizations where they intrinsically self-correct?

In a new pre-print, "The Illusion of Insight in Reasoning Models," led by @liv-daliberti.bsky.social, we provide strong evidence that they do not!

📜: arxiv.org/abs/2601.00514
January 5, 2026 at 7:39 PM
Testing with this excellent post.
The final post of 2025, where I propose the "paradox of highly optimized tolerance" (with nods to Carlson-Doyle and to Popper) as a frame for thinking about the impact of AI.

Thank you all for reading, subscribing, and commenting! Happy New Year!
The Paradox of Highly Optimized Tolerance
There really is no antimemetics division.
realizable.substack.com
January 5, 2026 at 8:09 PM
Oh man, filtering for papers was easy. Filtering for blog posts is surfacing so much anti-vax stuff
January 5, 2026 at 7:55 PM
Would it be useful or terrible to build a pile-on simulator so people could better understand what it feels like...?
January 5, 2026 at 7:25 PM
Reposted by Eugene Vinitsky 🍒
I'm teaching "Intro to NLP" this semester and am finalizing which topics to include. I haven't taught or taken this class in a decade! Which topics are you including in your NLP courses these days?

I'm curating a small list of similar courses here: www.are.na/maria-antoni...
January 5, 2026 at 6:06 PM
I want the AI Firefox features. So, there you go, you can't say no one wants them anymore
January 5, 2026 at 5:27 PM
When this paper title showed up as trending I thought "this must be a bug" but no, just a great paper title:
arxiv.org/abs/2601.00058
$2+2=4$
Motivated by the observation that $2+2=4$, we consider four-dimensional $\mathcal{N}=2$ superconformal field theories on $S^2 \times \Sigma$, turning on a suitable rigid supergravity background. On the one h...
arxiv.org
January 5, 2026 at 2:16 PM
Reposted by Eugene Vinitsky 🍒
Yeah I think learning the kernel trick is a good part of learning the intuition that everything in representational geometry is about similarities between datapoints, with the one massive degree of freedom available to the analysis being how we define the kernel.
January 5, 2026 at 6:25 AM
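[A minimal sketch of the intuition in the post above, assuming plain NumPy; the functions are illustrative, not from any real analysis library. The point is that the analysis only ever sees a matrix of pairwise similarities, and swapping the kernel swaps the geometry:]

```python
import numpy as np

def linear_kernel(X):
    # Gram matrix of inner products: K[i, j] = <x_i, x_j>
    return X @ X.T

def rbf_kernel(X, gamma=1.0):
    # K[i, j] = exp(-gamma * ||x_i - x_j||^2), computed via the
    # expansion ||x_i||^2 + ||x_j||^2 - 2 <x_i, x_j>
    sq_norms = np.sum(X**2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2 * X @ X.T
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))  # 5 datapoints in R^3

# Both analyses see only a 5x5 similarity matrix; which kernel
# produced it is the one degree of freedom that changes the geometry.
for K in (linear_kernel(X), rbf_kernel(X, gamma=0.5)):
    print(K.shape, np.allclose(K, K.T))  # (5, 5), symmetric
```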
Right now our client (out in a bit) successfully picks up and helps navigate papers posted on this site. I'd like to extend it to pick up interesting technical blogs but am sort of unsure how to filter for that. Thoughts from those in the know?
January 5, 2026 at 2:10 PM
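[For what the "pick up papers" part might look like: a rough sketch, assuming posts arrive as plain text with URLs inline. The host lists and the classify_post helper are hypothetical, not from the actual client; paper hosts are the easy case, and blogs are exactly the open question:]

```python
import re

# Hosts that almost always indicate a paper link: the easy case.
PAPER_HOSTS = {"arxiv.org", "biorxiv.org", "openreview.net", "aclanthology.org"}

# Blog filtering is the hard part; a host allowlist is one naive option.
BLOG_HOSTS = {"substack.com", "github.io", "wordpress.com"}

URL_RE = re.compile(r"https?://([^/\s]+)\S*")

def classify_post(text: str) -> str:
    """Very rough classifier for a post's outbound links."""
    for match in URL_RE.finditer(text):
        host = match.group(1).lower().removeprefix("www.")
        if any(host == h or host.endswith("." + h) for h in PAPER_HOSTS):
            return "paper"
        if any(host == h or host.endswith("." + h) for h in BLOG_HOSTS):
            return "maybe-blog"  # still needs content-level filtering
    return "other"

print(classify_post("new preprint! https://arxiv.org/abs/2601.00514"))  # paper
```

[As the anti-vax post above suggests, a host allowlist alone won't cut it for blogs; some content-level filter would have to sit behind the "maybe-blog" bucket.]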
Remember, no one has ever been convinced through discussion or argument with others. Relatedly, when I close my eyes, you cease to exist
January 5, 2026 at 4:54 AM
Truly this site hosts the most controversial of opinions
I feel like it is far more important to teach SVMs, and the machinery around them, in basic ML courses now than it was in 2012.
January 5, 2026 at 4:19 AM
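[A minimal sketch of the point in the quoted post, assuming scikit-learn on a toy dataset, not anyone's actual syllabus: the SVM machinery stays fixed while the kernel is the swappable part, tying back to the kernel-trick post above:]

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Toy nonlinear problem: two interleaved half-moons.
X, y = make_moons(noise=0.2, random_state=0)

# Identical machinery; only the kernel changes.
for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel, C=1.0).fit(X, y)
    print(kernel, clf.score(X, y))  # training accuracy: rbf >> linear
```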
Fun fact: did you know that if you repost a paper *once* it instantly makes it into the top 10 most discussed papers on this website.
January 5, 2026 at 2:34 AM
will get over it, will find other things to do, but I actually really liked writing lots of code
January 5, 2026 at 1:45 AM