Sweta Karlekar
swetakar.bsky.social
Machine learning PhD student @ Blei Lab at Columbia University

Working in mechanistic interpretability, NLP, causal inference, and probabilistic modeling!

Previously at Meta for ~3 years on the Bayesian Modeling & Generative AI teams.

🔗 www.sweta.dev
For those in NYC working in AI, ML-NYC is a free monthly speaker series co-organized by the Flatiron Institute, Columbia, and NYU. Past speakers include Bin Yu, Christos Papadimitriou, Léon Bottou (and many more). Talks are followed by a catered reception.

Join us Feb 11th @ 4pm for Romain Lopez!
The ML in NYC Speaker Series + Happy Hour is excited to host Professor Romain Lopez as our February speaker as he discusses deep generative models and their applications to cellular and molecular biology!

www.eventbrite.com/e/ml-nyc-spe...
ML-NYC Speaker Series and Happy Hour: Romain Lopez
Learning from Millions of Cells with Deep Generative Models
www.eventbrite.com
January 31, 2026 at 4:17 PM
Reposted by Sweta Karlekar
Excited to highlight recent work from the lab at NeurIPS! If you’re interested in understanding why uncertainty estimates often break under distribution shift — and how we can do better — check out Yuli’s poster tomorrow.
Uncertainty estimation fails under distribution shifts. Why? Partly because in stats, even Bayesian stats, we treat x as given. But intuitively, the data makes different models plausible. For reliable uncertainty, we need to account for this explicitly. Come chat with me about it tomorrow at my poster!
NeurIPS Poster: Quantifying Uncertainty in the Presence of Distribution Shifts (NeurIPS 2025)
neurips.cc
December 3, 2025 at 5:08 PM
The ML in NYC Speaker Series + Happy Hour is excited to host Professor Daniel Björkegren as our December speaker as he speaks about AI for Low-Income Countries!

Registration: www.eventbrite.com/e/ml-nyc-spe...
ML-NYC Speaker Series and Happy Hour: Daniel Björkegren
AI for Low-Income Countries
www.eventbrite.com
December 2, 2025 at 6:21 PM
Reposted by Sweta Karlekar
Hello!

We will be presenting Estimating the Hallucination Rate of Generative AI at NeurIPS. Come if you'd like to chat about epistemic uncertainty for In-Context Learning, or uncertainty more generally. :)

Location: East Exhibit Hall A-C #2703
Time: Friday @ 4:30
Paper: arxiv.org/abs/2406.07457
December 12, 2024 at 6:13 PM
Reposted by Sweta Karlekar
fun @bleilab.bsky.social x oatml collab

come chat with Nicolas, @swetakar.bsky.social, Quentin, Jannik, and me today
December 13, 2024 at 5:26 PM
Reposted by Sweta Karlekar
Check out our new paper from the Blei Lab on probabilistic predictions with conditional diffusions and gradient boosted trees! #Neurips2024
I am very excited to share our new NeurIPS 2024 paper + package, Treeffuser! 🌳 We combine gradient-boosted trees with diffusion models for fast, flexible probabilistic predictions and well-calibrated uncertainty.

paper: arxiv.org/abs/2406.07658
repo: github.com/blei-lab/tre...

🧵(1/8)
December 2, 2024 at 11:02 PM
Reposted by Sweta Karlekar
Check out our new paper about hypothesis testing the circuit hypothesis in LLMs! This work previously won a top paper award at the ICML mechanistic interpretability workshop, and we’re excited to share it at #Neurips2024
The circuit hypothesis proposes that LLM capabilities emerge from small subnetworks within the model. But how can we actually test this? 🤔

joint work with @velezbeltran.bsky.social @maggiemakar.bsky.social @anndvision.bsky.social @bleilab.bsky.social Adria @far.ai Achille and Caro
December 10, 2024 at 7:07 PM
Reposted by Sweta Karlekar
For anyone interested in fine-tuning or aligning LLMs, I’m running this free and open course called smol course. It’s not a big deal, it’s just smol.

🧵>>
December 3, 2024 at 9:21 AM
Very happy to share some recent work by my colleagues @velezbeltran.bsky.social, @aagrande.bsky.social and @anazaret.bsky.social! Check out their work on tree-based diffusion models (especially the website—it’s quite superb 😊)!
December 2, 2024 at 10:49 PM
Just learned about @andrewyng.bsky.social's new tool, aisuite (github.com/andrewyng/ai...) and wanted to share! It's a standardized wrapper around chat completions that lets you easily switch between querying different LLM providers, including OpenAI, Anthropic, Mistral, HuggingFace, Ollama, etc.
GitHub - andrewyng/aisuite: Simple, unified interface to multiple Generative AI providers
github.com
November 29, 2024 at 8:25 PM
Reposted by Sweta Karlekar
Test of Time Paper Awards are out! 2014 was a wonderful year with lots of amazing papers. That's why we decided to highlight two papers: GANs (@ian-goodfellow.bsky.social et al.) and Seq2Seq (Sutskever et al.). Both papers will be presented in person 😍

Link: blog.neurips.cc/2024/11/27/a...
Announcing the NeurIPS 2024 Test of Time Paper Awards  – NeurIPS Blog
blog.neurips.cc
November 27, 2024 at 3:48 PM
Sorry John, that isn’t my area of expertise!
November 25, 2024 at 12:44 AM
This is very interesting! Do you have any intuition as to whether this phenomenon happens only with very simple “reasoning” steps? Does reliance on retrieval increase as you progress from simple math to more advanced prompts like GSM8K, or to adversarially designed prompts (like adding noise)?
November 24, 2024 at 4:29 PM
Reposted by Sweta Karlekar
The Gini coefficient is the standard way to measure inequality, but what does it mean, concretely? I made a little visualization to build intuition:
www.bewitched.com/demo/gini
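For intuition to go with the visualization: the Gini coefficient is the mean absolute difference between all pairs of values, normalized by twice the mean. A minimal sketch in plain Python (the function name and example data are mine, not from the post):

```python
def gini(values):
    """Gini coefficient via the relative mean absolute difference.

    G = sum_{i,j} |x_i - x_j| / (2 * n^2 * mean)
    0 means perfect equality; values near 1 mean extreme inequality.
    """
    n = len(values)
    mean = sum(values) / n
    # Sum of absolute differences over all ordered pairs.
    diff_sum = sum(abs(a - b) for a in values for b in values)
    return diff_sum / (2 * n * n * mean)

print(gini([1, 1, 1, 1]))  # everyone equal -> 0.0
print(gini([0, 0, 0, 4]))  # one person holds everything -> 0.75
```

Note the maximum for n people is (n - 1)/n, which is why four people with all wealth concentrated in one yields 0.75 rather than 1.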
November 23, 2024 at 3:31 PM
Reposted by Sweta Karlekar
Interested in machine learning in science?

Timo and I recently published a book, and even if you are not a scientist, you'll find useful overviews of topics like causality and robustness.

The best part is that you can read it for free: ml-science-book.com
November 15, 2024 at 9:46 AM
Learning doesn’t have to mean explicit weight changes; ICL can be viewed as temporary implicit finetuning (arxiv.org/abs/2212.10559) or like a “state” change to the model instead of a weight change, akin to how learning happens in fast RL vs slow RL (www.cell.com/trends/cogni...).
Why Can GPT Learn In-Context? Language Models Implicitly Perform Gradient Descent as Meta-Optimizers
Large pretrained language models have shown surprising in-context learning (ICL) ability. With a few demonstration input-label pairs, they can predict the label for an unseen input without parameter u...
arxiv.org
November 22, 2024 at 10:31 PM
Reposted by Sweta Karlekar
new paper from Anthropic on LLM evaluation recommendations

www.anthropic.com/research/sta...
A statistical approach to model evaluations
A research paper from Anthropic on how to apply statistics to improve language model evaluations
www.anthropic.com
November 22, 2024 at 12:47 PM
Reposted by Sweta Karlekar
Just realized Bluesky allows sharing valuable stuff because it doesn't punish links. 🤩

Let's start with "What are embeddings" by @vickiboykis.com

The book is a great summary of embeddings, from history to modern approaches.

The best part: it's free.

Link: vickiboykis.com/what_are_emb...
November 22, 2024 at 11:13 AM
(Shameless) plug for David Blei's lab at Columbia University! People in the lab currently work on a variety of topics, including probabilistic machine learning, Bayesian stats, mechanistic interpretability, causal inference, and NLP.

Please give us a follow! @bleilab.bsky.social
November 20, 2024 at 8:42 PM
Hi! Our lab does Bayesian stuff :) Could you add Dave Blei's lab to this pack as well if it's not already full? @bleilab.bsky.social
November 20, 2024 at 3:38 PM
Could you add Dave Blei's lab to this pack as well if it's not already full? @bleilab.bsky.social
November 20, 2024 at 3:37 PM
Could you add Dave Blei's lab to this pack as well if it's not already full? @bleilab.bsky.social
November 20, 2024 at 3:36 PM
Could you add Dave Blei's lab to this pack as well if it's not already full? @bleilab.bsky.social
November 20, 2024 at 3:36 PM
Reposted by Sweta Karlekar
We created an account for the Blei Lab! Please drop a follow 😊

@bleilab.bsky.social
November 20, 2024 at 3:34 PM