NDIF Team
@ndif-team.bsky.social
The National Deep Inference Fabric, an NSF-funded computational infrastructure to enable research on large-scale Artificial Intelligence.

🔗 NDIF: https://ndif.us
🧰 NNsight API: https://nnsight.net
😸 GitHub: https://github.com/ndif-team/nnsight
👀 More advanced interpretability tools coming soon. What techniques would you like to see? Reach out or drop suggestions in the form.
October 10, 2025 at 5:36 PM
This is a public beta, so we expect bugs and actively want your feedback: forms.gle/WsxmZikeLNw3...
NDIF Workbench Feedback
October 10, 2025 at 5:36 PM
Study any NDIF-hosted model (including Llama 405B) directly in your browser. Our first tool, Logit Lens, lets you peer inside LLM computations layer-by-layer. Watch the full demo on YouTube (www.youtube.com/watch?v=BK-q...) or try it yourself: workbench.ndif.us
Workbench Logit Lens Demo
October 10, 2025 at 5:36 PM
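For readers new to the technique: a logit lens projects each layer's intermediate hidden state through the model's unembedding matrix, yielding a per-layer next-token distribution that shows how a prediction forms across depth. Here is a minimal NumPy sketch of that idea, with random stand-in weights and made-up dimensions rather than any hosted model's actual internals:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy sizes (assumptions, not a real model's configuration).
d_model, vocab = 16, 50
n_layers = 4

# Pretend per-layer residual-stream states for one token position,
# as if collected with a hook at each layer.
hidden_states = [rng.normal(size=d_model) for _ in range(n_layers)]

# Stand-in for the unembedding (lm_head) matrix.
W_U = rng.normal(size=(d_model, vocab))

def logit_lens(h, W_U):
    """Project an intermediate hidden state straight to vocab logits,
    then softmax for an interpretable per-layer distribution."""
    logits = h @ W_U
    e = np.exp(logits - logits.max())
    return e / e.sum()

for layer, h in enumerate(hidden_states):
    probs = logit_lens(h, W_U)
    print(f"layer {layer}: top token id = {probs.argmax()}")
```

In a real model the same projection is applied at every layer (often after the final layer norm), so you can watch the top token stabilize as depth increases.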
Read the paper or play around with some demos on the project website!

ArXiv: arxiv.org/abs/2410.22366
Project Website: sdxl-unbox.epfl.ch/
October 3, 2025 at 6:45 PM
Participants will:

1. Be in the first cohort of users to access models beyond our whitelist
2. Directly control which models are hosted on the NDIF backend
3. Receive guided support on their project from the NDIF team
4. Provide feedback that guides the future user experience
September 4, 2025 at 12:41 AM
This fall, we are running a program to test our model hot-swapping on real research projects. Projects should require internal access to multiple models, which could include model checkpoints, different model sizes, unique model architectures, or other creative approaches.
September 4, 2025 at 12:41 AM
We will use this channel to post lectures on AI interpretability research, educational information, NDIF and NNsight updates, and more. If you're interested in collaborating on a video or would like to suggest a topic, please reach out!
August 7, 2025 at 5:36 PM
Want to try it for yourself? Check out our new mini-paper tutorial in NNsight to see how intervening on concept induction heads can reveal language-invariant concepts and cause a model to paraphrase text!

🔗 nnsight.net/notebooks/m...
August 5, 2025 at 4:31 PM
Using causal mediation analysis on words that span multiple tokens, @sfeucht.bsky.social et al. found concept induction heads that are separate from token induction heads.

🔗 dualroute.baulab.info/
August 5, 2025 at 4:31 PM
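Activation patching is one concrete way such a causal mediation analysis can be run: record a component's activation on a clean input, splice it into a corrupted run, and measure how much of the clean behavior returns. Below is a toy NumPy sketch of that recipe; the two-path model, weights, and dimensions are invented for illustration and do not reflect the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_head, vocab = 12, 8, 20

# Toy weights: a direct path plus one "head" path (names are illustrative).
W_head = rng.normal(size=(d_model, d_head))
W_direct = rng.normal(size=(d_model, vocab))
W_out = rng.normal(size=(d_head, vocab))

def forward(x, patched_head=None):
    """Tiny two-path model; optionally overwrite the head's activation."""
    head_act = x @ W_head if patched_head is None else patched_head
    return x @ W_direct + head_act @ W_out

x_clean = rng.normal(size=d_model)    # stands in for a clean prompt
x_corrupt = rng.normal(size=d_model)  # a minimally different prompt

clean_logits = forward(x_clean)
corrupt_logits = forward(x_corrupt)

# Mediation step: run the corrupted input, but patch in the head
# activation recorded on the clean input.
patched_logits = forward(x_corrupt, patched_head=x_clean @ W_head)

# Indirect effect through this head, measured on the clean top token.
t = clean_logits.argmax()
indirect_effect = patched_logits[t] - corrupt_logits[t]
print(f"indirect effect via head on token {t}: {indirect_effect:.3f}")
```

A large indirect effect indicates the patched component mediates the behavior; comparing effects across heads and token positions is how head families like token-level and concept-level induction heads can be distinguished.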