Badr AlKhamissi
@bkhmsi.bsky.social
PhD at EPFL 🧠💻
Ex @MetaAI, @SonyAI, @Microsoft
Egyptian 🇪🇬
On my way to #EMNLP2025 🇨🇳
I’ll be presenting our work (Oral) on Nov 5, Special Theme session, Room A106-107 at 14:30.
Let’s talk brains 🧠, machines 🤖, and everything in between :D
Looking forward to all the amazing discussions!
November 2, 2025 at 12:06 PM
🚀 Excited to share a major update to our “Mixture of Cognitive Reasoners” (MiCRo) paper!
We ask: What benefits can we unlock by designing language models whose inner structure mirrors the brain’s functional specialization?
More below 🧠👇
cognitive-reasoners.epfl.ch
October 20, 2025 at 12:10 PM
Excited to be part of this cool work led by Melika Honarmand!
We show that selectively targeting VLM units that mirror the brain’s visual word form area induces dyslexic-like reading impairments in the models while leaving other abilities intact!! 🧠🤖
Details in the 🧵👇
October 2, 2025 at 1:27 PM
Now that the ICLR deadline is behind us, happy to share that “From Language to Cognition” has been accepted as an Oral at #EMNLP2025! 🎉
Looking forward to seeing many of you in Suzhou 🇨🇳
🚨 New Preprint!!
LLMs trained on next-word prediction (NWP) show high alignment with brain recordings. But what drives this alignment—linguistic structure or world knowledge? And how does this alignment evolve during training? Our new paper explores these questions. 👇🧵
September 25, 2025 at 2:56 PM
Reposted by Badr AlKhamissi
1/🚨 New preprint
How do #LLMs’ inner features change as they train? Using #crosscoders + a new causal metric, we map when features appear, strengthen, or fade across checkpoints—opening a new lens on training dynamics beyond loss curves & benchmarks.
#interpretability
September 25, 2025 at 2:02 PM
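For readers new to the term: a crosscoder is, roughly, a sparse dictionary learned jointly over activations from several model checkpoints, so the same latent feature can be tracked across training. A minimal PyTorch sketch under that reading (layer sizes, the ReLU code, and the decoder-norm comparison are my assumptions, not the preprint's architecture or its causal metric):

```python
import torch
import torch.nn as nn

class TinyCrosscoder(nn.Module):
    """Sketch of a crosscoder: one shared sparse latent space that
    reconstructs activations at every checkpoint. Assumption-level code."""

    def __init__(self, d_act: int = 512, d_latent: int = 2048, n_ckpts: int = 2):
        super().__init__()
        # Encoder reads the concatenated activations from all checkpoints.
        self.encoder = nn.Linear(d_act * n_ckpts, d_latent)
        # One decoder per checkpoint, all reading the same latent code.
        self.decoders = nn.ModuleList(
            [nn.Linear(d_latent, d_act) for _ in range(n_ckpts)]
        )

    def forward(self, acts: list[torch.Tensor]):
        z = torch.relu(self.encoder(torch.cat(acts, dim=-1)))  # shared features
        recons = [dec(z) for dec in self.decoders]
        return z, recons

cc = TinyCrosscoder()
z, recons = cc([torch.randn(8, 512), torch.randn(8, 512)])
# Training would minimize reconstruction error plus an L1 sparsity penalty on z;
# a feature's per-checkpoint decoder norms then indicate where it appears or fades.
print(z.shape, recons[0].shape)
```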
Reposted by Badr AlKhamissi
NEW PAPER ALERT: Recent studies have shown that LLMs often lack robustness to distribution shifts in their reasoning. Our paper proposes a new method, AbstRaL, to augment LLMs’ reasoning robustness by promoting their abstract thinking with granular reinforcement learning.
June 23, 2025 at 2:32 PM
Reposted by Badr AlKhamissi
Check out @bkhmsi.bsky.social 's great work on mixture-of-expert models that are specialized to represent the behavior of known brain networks.
🚨 New Preprint!!
Thrilled to share with you our latest work: “Mixture of Cognitive Reasoners”, a modular transformer architecture inspired by the brain’s functional networks: language, logic, social reasoning, and world knowledge.
1/ 🧵👇
June 18, 2025 at 10:46 AM
June 17, 2025 at 3:07 PM
🚨 New Preprint!!
Thrilled to share with you our latest work: “Mixture of Cognitive Reasoners”, a modular transformer architecture inspired by the brain’s functional networks: language, logic, social reasoning, and world knowledge.
1/ 🧵👇
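To make the architecture concrete, here is a toy PyTorch sketch of a transformer feed-forward block split into one expert per cognitive domain with a learned router. The expert names follow the post; the sizes and dense softmax routing are illustrative assumptions, not MiCRo's actual implementation:

```python
import torch
import torch.nn as nn

class CognitiveExpertsLayer(nn.Module):
    """Toy mixture-of-experts block with one expert per cognitive domain.
    Names follow the post; routing and sizes are illustrative assumptions."""

    EXPERTS = ["language", "logic", "social", "world_knowledge"]

    def __init__(self, d_model: int = 256, d_ff: int = 1024):
        super().__init__()
        self.experts = nn.ModuleDict({
            name: nn.Sequential(
                nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
            )
            for name in self.EXPERTS
        })
        self.router = nn.Linear(d_model, len(self.EXPERTS))  # scores per token

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = self.router(x).softmax(dim=-1)               # (B, S, E)
        outs = torch.stack(
            [self.experts[name](x) for name in self.EXPERTS], dim=-1
        )                                                      # (B, S, D, E)
        return (outs * weights.unsqueeze(2)).sum(dim=-1)       # weighted mix

layer = CognitiveExpertsLayer()
print(layer(torch.randn(2, 8, 256)).shape)  # torch.Size([2, 8, 256])
```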
Excited to present our position paper tomorrow at the @c3nlp.bsky.social workshop at #NAACL2025:
"Hire Your Anthropologist!" 🎓
Led by the amazing Mai Alkhamissi & @lrz-persona.bsky.social, under the supervision of @monadiab77.bsky.social. Don’t miss it! 😄
arXiv link coming soon!
"Hire Your Anthropologist!" 🎓
Led by the amazing Mai Alkhamissi & @lrz-persona.bsky.social, under the supervision of @monadiab77.bsky.social. Don’t miss it! 😄
arXiv link coming soon!
May 4, 2025 at 12:53 AM
Excited to be at #NAACL2025 in Albuquerque! I’ll be presenting our paper “The LLM Language Network” as an Oral tomorrow at 2:00 PM in Ballroom C. Hope to see you there!
Looking forward to all the discussions! 🎤 🧠
April 30, 2025 at 12:38 AM
Reposted by Badr AlKhamissi
Before ICLR 2025 comes to an end today, a few #NeuroAI impressions from Singapore.
First, very happy to present our work on TopoLM as an oral, here with @neilrathi.bsky.social
initial thread: bsky.app/profile/hann...
paper: doi.org/10.48550/arX...
code: github.com/epflneuroailab
April 28, 2025 at 3:17 AM
Not at #ICLR2025 this year, but excited that @neilrathi.bsky.social and @hannesmehrer.bsky.social will be presenting our TopoLM paper during Friday’s Oral Session 4C. Don’t miss it!
Together with @neil_rathi, I will present our #ICLR2025 Oral paper on TopoLM, a topographic language model!
Oral: Friday, 25 Apr 4:18 p.m. (session 4C)
Poster: Friday, 25 Apr 10 a.m., Hall 3 + Hall 2B
Paper: arxiv.org/abs/2410.11516
Code and weights: github.com/epflneuroailab
April 23, 2025 at 11:49 AM
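For intuition about what makes a language model "topographic": units are assigned fixed positions on a 2D sheet, and training encourages nearby units to respond similarly. A hedged sketch of one such regularizer (the actual TopoLM objective is in the paper linked above; everything here is an assumption-level illustration):

```python
import torch

def topographic_loss(acts: torch.Tensor, pos: torch.Tensor) -> torch.Tensor:
    """Encourage spatially close units to have correlated responses.
    acts: (batch, n_units) activations; pos: (n_units, 2) sheet coordinates."""
    a = acts - acts.mean(dim=0, keepdim=True)
    a = a / (a.norm(dim=0, keepdim=True) + 1e-8)
    corr = a.T @ a                                 # (n_units, n_units) correlations
    proximity = torch.exp(-torch.cdist(pos, pos))  # nearby pairs weigh more
    # Penalize decorrelation between spatially close units.
    return ((1.0 - corr) * proximity).mean()

acts = torch.randn(32, 64)   # 32 stimuli, 64 units
pos = torch.rand(64, 2)      # units scattered on a unit square
print(topographic_loss(acts, pos))
```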
With the Studio Ghibli AI trend taking over the internet, it's a good moment to reshare a blog post I wrote two years ago: The Curse of "Creative" AI.
Interested to hear your thoughts on this matter!
medium.com/@bkhmsi/the-...
The Curse of ‘Creative’ AI
Should we create art using AI?
medium.com
March 31, 2025 at 6:55 PM
🚨 New Preprint!!
LLMs trained on next-word prediction (NWP) show high alignment with brain recordings. But what drives this alignment—linguistic structure or world knowledge? And how does this alignment evolve during training? Our new paper explores these questions. 👇🧵
March 5, 2025 at 3:58 PM
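For context, "alignment with brain recordings" is usually scored with a linear encoding model: regress voxel responses on model activations and measure the correlation on held-out stimuli. A sketch of that standard analysis on synthetic data (the paper's exact pipeline may differ):

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: X = model activations per stimulus, Y = fMRI voxels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 768))                    # 200 stimuli x 768 units
Y = (X @ rng.normal(size=(768, 50))) * 0.1 + rng.normal(size=(200, 50))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
enc = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, Y_tr)
pred = enc.predict(X_te)

# Alignment score: mean Pearson r between predicted and held-out voxel responses.
r = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(Y.shape[1])]
print(f"mean brain-alignment r = {np.mean(r):.3f}")
```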
Reposted by Badr AlKhamissi
Lots of great news out of the EPFL NLP lab these last few weeks. We'll be at @iclr-conf.bsky.social and @naaclmeeting.bsky.social in April / May to present some of our work in training dynamics, model representations, reasoning, and AI democratization. Come chat with us during the conference!
February 25, 2025 at 9:18 AM
It’s very inspiring how scientists throughout history turned technological limitations into innovation. Constraints spark creativity! ✨
Must-watch video:
youtu.be/YdOXS_9_P4U?...
Terence Tao on how we measure the cosmos | Distance ladder part 1
YouTube video by 3Blue1Brown
youtu.be
February 16, 2025 at 9:45 PM
Excited to share that our paper, 'The LLM Language Network,' has been accepted to NAACL 2025! Looking forward to presenting it in Albuquerque—see you there! 🏜️ #NAACL2025
🚨 New Paper!
Can neuroscience localizers uncover brain-like functional specializations in LLMs? 🧠🤖
Yes! We analyzed 18 LLMs and found units mirroring the brain's language, theory of mind, and multiple demand networks!
w/ @gretatuckute.bsky.social, @abosselut.bsky.social, @mschrimpf.bsky.social
🧵👇
February 14, 2025 at 1:08 PM
🚀 Exciting milestone: Egyptians in AI Research now features 170 amazing minds shaping the future of technology! 🌍💡
Know someone not listed? Submit their info and let's grow this inspiring community! 🤝
Explore here: bkhmsi.github.io/egyptians-in...
Egyptians in AI Research
A website dedicated to showcasing the profiles of prominent Egyptian researchers in the field of Artificial Intelligence.
bkhmsi.github.io
January 14, 2025 at 8:11 AM
Spent the final days of 2024 completely off the grid—no signal, no electricity—trekking through the Sinai mountains. It was peaceful, refreshing, and a welcome escape. Highly recommend stepping away from the digital world once in a while to rediscover the simple beauty nature has to offer ✨
January 3, 2025 at 11:40 AM
Despite its ups and downs, this year has been a success. I’d like to share some of my accomplishments with you, highlighted in the attached image. Here’s hoping 2025 brings even greater things for us all!
December 27, 2024 at 4:55 PM
Reposted by Badr AlKhamissi
Ablation of LLM "language network" units (dark blue) versus random units (light blue): bsky.app/profile/bkhm...
Moreover, this table shows the effect of ablations on next word prediction for a few sample models:
December 19, 2024 at 4:01 PM
Reposted by Badr AlKhamissi
Check out @bkhmsi.bsky.social's summary of our new paper:
We identify "language network" units in LLMs using neuroscience approaches and show that ablating these units (but not random ones) drastically impairs LLM language performance; moreover, these units better align with human brain data.
🚨 New Paper!
Can neuroscience localizers uncover brain-like functional specializations in LLMs? 🧠🤖
Yes! We analyzed 18 LLMs and found units mirroring the brain's language, theory of mind, and multiple demand networks!
w/ @gretatuckute.bsky.social, @abosselut.bsky.social, @mschrimpf.bsky.social
🧵👇
December 19, 2024 at 3:58 PM
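A compact sketch of the localize-then-ablate recipe described above: contrast unit activations on sentences versus control strings (mirroring the fMRI language localizer), keep the most selective units, and zero them out. All specifics below (the toy data, unit counts, the t-like statistic) are stand-ins rather than the paper's setup:

```python
import torch

def localize_units(acts_sent, acts_ctrl, k: int) -> torch.Tensor:
    """acts_*: (n_stimuli, n_units). Return indices of the k units most
    selective for sentences over control strings (simple t-like contrast)."""
    diff = acts_sent.mean(0) - acts_ctrl.mean(0)
    se = (acts_sent.var(0) / len(acts_sent) + acts_ctrl.var(0) / len(acts_ctrl)).sqrt()
    return (diff / (se + 1e-8)).topk(k).indices

def ablate(hidden: torch.Tensor, units: torch.Tensor) -> torch.Tensor:
    """Zero the localized units; in a real model this would run as a
    forward hook on the layer whose units were localized."""
    hidden = hidden.clone()
    hidden[..., units] = 0.0
    return hidden

# Toy activations with 32 planted sentence-selective units out of 512.
acts_sent = torch.randn(100, 512) + 0.5 * (torch.arange(512) < 32)
acts_ctrl = torch.randn(100, 512)
lang_units = localize_units(acts_sent, acts_ctrl, k=32)
print(ablate(torch.randn(4, 16, 512), lang_units).shape)  # (4, 16, 512)
```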
🚨 New Paper!
Can neuroscience localizers uncover brain-like functional specializations in LLMs? 🧠🤖
Yes! We analyzed 18 LLMs and found units mirroring the brain's language, theory of mind, and multiple demand networks!
w/ @gretatuckute.bsky.social, @abosselut.bsky.social, @mschrimpf.bsky.social
🧵👇
December 19, 2024 at 3:06 PM