Patrik Reizinger
rpatrik96.bsky.social
PhD student working on understanding why neural nets generalize @MPI Tübingen | ex-Vector | path2phd.substack.com | 🇭🇺 🇪🇺
I had the pleasure of being interviewed on the @i-am-scientist.bsky.social podcast by @lisaschmors.bsky.social and @philipphubert.bsky.social. I talk about how I ended up writing my newsletter (path2phd.substack.com) and the people who supported me all along. Enjoy!
The path to PhD - Advice from a young scientist
I am Scientist · Episode
open.spotify.com
May 2, 2025 at 5:12 PM
Reposted by Patrik Reizinger
So—does this mean theory is doomed, and AI engineering is just a random walk? Not at all!

💡 @rpatrik96.bsky.social @wielandbrendel.bsky.social Randall Balestriero did an amazing job clarifying where theory can help practice—and where practice should inspire theory.

🤝
April 18, 2025 at 2:15 PM
Reposted by Patrik Reizinger
How can neural nets extract *interpretable* features from data—& uncover new science?

👉 Discover our mathematical framework tackling this question w/ identifiability theory, compressed sensing, interpretability & geometry!🌐

By @david-klindt.bsky.social, @rpatrik96.bsky.social, C. O'Neill & H. Maurer
March 5, 2025 at 2:41 PM
Reposted by Patrik Reizinger
New preprint!🚀

Decoding neural representations is a challenge in neuroscience & AI.

👉 Learn how identifiability theory, compressed sensing & interpretability research -w/ a dash of geometry- can help!

@david-klindt.bsky.social, @rpatrik96.bsky.social, C. O'Neill, H. Maurer & @ninamiolane.bsky.social
🔵 New paper! We explore sparse coding, superposition, and the Linear Representation Hypothesis (LRH) through identifiability theory, compressed sensing, and interpretability. If you’re curious about lifting neural reps out of superposition, this might interest you! 🤓
arxiv.org/abs/2503.01824
From superposition to sparse codes: interpretable representations in neural networks
Understanding how information is represented in neural networks is a fundamental challenge in both neuroscience and artificial intelligence. Despite their nonlinear architectures, recent evidence sugg...
arxiv.org
March 5, 2025 at 2:43 PM
Reposted by Patrik Reizinger
🚀 We’re hiring! Join Bernhard Schölkopf & me at @ellisinsttue.bsky.social to push the frontier of #AI in education!

We’re building cutting-edge, open-source AI tutoring models to bring high-quality, adaptive learning to all pupils, with support from the Hector Foundation.

👉 forms.gle/sxvXbJhZSccr...
February 11, 2025 at 4:34 PM
Check out our @neuripsconf.bsky.social spotlight showing how language models learn to extrapolate and compose language rules.

See you on Fri 13 Dec, 4:30–7:30 p.m. PST, at poster #2702 in East Hall.

neurips.cc/virtual/2024...
NeurIPS Poster: Rule Extrapolation in Language Modeling: A Study of Compositional Generalization on OOD Prompts (NeurIPS 2024)
neurips.cc
December 13, 2024 at 4:59 PM
Reposted by Patrik Reizinger
Congrats to the recipients of the 2024 ELLIS PhD Award!

Co-winners: @koloskova.bsky.social (efficiency in decentralized learning) & Luigi Gresele (identifiable representation learning)
Runner up: @schwarzjn.bsky.social (sparse parameterizations)

Read more about them: bit.ly/4fg4jkg
#AI #ML
December 12, 2024 at 8:47 AM
Reposted by Patrik Reizinger
1/ Hi all, I am at #NeurIPS2024 and I will be hiring a postdoc in probabilistic machine learning, starting ASAP.

Research interests: amortized, approximate & simulator-based inference, Bayesian optimization, and AI4science.

Get in touch for a chat or come to our posters today 11AM or Friday 11AM!
December 11, 2024 at 4:26 PM
Reposted by Patrik Reizinger
📄 New Paper: "How to Merge Your Multimodal Models Over Time?"

arxiv.org/abs/2412.06712

Model merging assumes all finetuned models are available at once. But what if they need to be created over time?

We study Temporal Model Merging through the TIME framework to find out!

🧵
How to Merge Your Multimodal Models Over Time?
Model merging combines multiple expert models - finetuned from a base foundation model on diverse tasks and domains - into a single, more capable model. However, most existing model merging approaches...
arxiv.org
December 11, 2024 at 6:00 PM
Reposted by Patrik Reizinger
We are looking for a postdoc on human-AI interaction to work with @mirizilka.bsky.social.

The project studies systems used in law enforcement and legal decision-making, where humans act on recommendations or predictions made by ML algorithms.

Email for details.

www.jobs.cam.ac.uk/job/49538/
Research Assistant/Associate in Machine Learning (Fixed Term) - Job Opportunities - University of Cambridge
Research Assistant/Associate in Machine Learning (Fixed Term) in the Department of Engineering at the University of Cambridge.
www.jobs.cam.ac.uk
December 11, 2024 at 8:16 PM
Check out our @neuripsconf.bsky.social spotlight showing a useful inductive bias that lets language models extrapolate language rules.

See you on Fri 13 Dec, 4:30–7:30 p.m. PST, at poster #2702.
Can language models transcend the limitations of training data?

We train LMs on a formal grammar, then prompt them OUTSIDE of this grammar. We find that LMs often extrapolate logical rules and apply them OOD, too. Evidence of a useful inductive bias.

Check it out at NeurIPS:

nips.cc/virtual/2024...
NeurIPS Poster: Rule Extrapolation in Language Modeling: A Study of Compositional Generalization on OOD Prompts (NeurIPS 2024)
nips.cc
December 6, 2024 at 6:47 PM
Next week I will be in Berkeley. Reach out if you'd like to talk about OOD/compositional generalization, causality, SSL, or inductive biases.
November 28, 2024 at 7:26 AM
Reposted by Patrik Reizinger
The ✨ML Internship Feed✨ is here!

@serge.belongie.com and I created this feed to compile internship opportunities in AI, ML, CV, NLP, and related areas.

The feed is rule-based. Please help us improve the rules by sharing feedback 🧡

🔗 Link to the feed: bsky.app/profile/did:...
November 22, 2024 at 9:46 PM
Reposted by Patrik Reizinger
Join the Interdisciplinary Postdoc Fellowship Program at the European Molecular Biology Laboratory (EMBL), one of the best places to do research in modern biology and develop your career.

Great opportunities for statisticians, computational biologists, AI experts, and mathematical modelers!
www.embl.org/eipod-linc
November 22, 2024 at 9:02 AM