Building a Natural Science of Intelligence 🧠🤖
Prev: ICoN Postdoctoral Fellow @MIT, PhD @Stanford NeuroAILab
Personal Website: https://cs.cmu.edu/~anayebi
Slides: www.cs.cmu.edu/~mgormley/co...
Full course info: bsky.app/profile/anay...
Detailed summary: bsky.app/profile/reec...
Philippe's seminal work is in fact what our recent closed-form UBI AI capability threshold builds on: bsky.app/profile/anay...
www.youtube.com/watch?v=Oajq...
See @ithobani.bsky.social's thread for more details on comparing brains to machines!
Preprint: arxiv.org/abs/2510.02523
Read more here: www.newsweek.com/ai-taking-jo...
We show each of these amounts to finetuning a different aspect of the Transformer.
In my recent @cmurobotics.bsky.social seminar talk, “Using Embodied Agents to Reverse-Engineer Natural Intelligence”,
We found that Convolutional Recurrent Neural Networks (ConvRNNs) pass the NeuroAI Turing Test on currently available mouse somatosensory cortex data.
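For readers unfamiliar with the architecture: a ConvRNN swaps the dense recurrence of a standard RNN for convolutions, so the hidden state keeps a spatial layout as it evolves in time. Below is a minimal, generic sketch of such a cell, purely for illustration; it is not the paper's actual model, and every name and size in it is made up.

```python
import torch
import torch.nn as nn

class ConvRNNCell(nn.Module):
    """Generic convolutional recurrent cell (illustration only, not the paper's model)."""
    def __init__(self, in_channels: int, hidden_channels: int, kernel_size: int = 3):
        super().__init__()
        padding = kernel_size // 2
        # Both the input-to-hidden and hidden-to-hidden transitions are convolutions,
        # so the hidden state keeps its spatial layout across time steps.
        self.input_conv = nn.Conv2d(in_channels, hidden_channels, kernel_size, padding=padding)
        self.recurrent_conv = nn.Conv2d(hidden_channels, hidden_channels, kernel_size, padding=padding)

    def forward(self, x, h=None):
        # x: (batch, in_channels, H, W); h: (batch, hidden_channels, H, W) or None.
        if h is None:
            h = torch.zeros(x.size(0), self.recurrent_conv.in_channels,
                            x.size(2), x.size(3), device=x.device)
        return torch.relu(self.input_conv(x) + self.recurrent_conv(h))

# Unroll over a short stimulus sequence: 4 frames of a 1-channel 16x16 input, batch of 2.
cell, h = ConvRNNCell(in_channels=1, hidden_channels=8), None
for frame in torch.randn(4, 2, 1, 16, 16):   # (time, batch, channels, H, W)
    h = cell(frame, h)
print(h.shape)  # torch.Size([2, 8, 16, 16])
```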
New paper by @Yuchen @Nathan @anayebi.bsky.social and me!
and @leokoz8.bsky.social!
We show how autonomous behavior and whole-brain dynamics emerge in embodied agents with intrinsic motivation driven by world models.
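One common way to operationalize "intrinsic motivation driven by world models" is curiosity-style exploration, where the agent is rewarded in proportion to its world model's prediction error. The sketch below shows only that generic recipe; it is not the paper's actual objective, and every name in it is hypothetical.

```python
import torch
import torch.nn as nn

class WorldModel(nn.Module):
    """Tiny forward model: predicts the next state from (state, action). Hypothetical, for illustration."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(), nn.Linear(hidden, state_dim)
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def intrinsic_reward(model, state, action, next_state):
    # Curiosity-style signal: the worse the world model predicts the transition,
    # the larger the reward, pushing the agent toward poorly modeled regions.
    with torch.no_grad():
        pred = model(state, action)
    return ((pred - next_state) ** 2).mean(dim=-1)

# Toy usage with random transitions (batch of 5).
wm = WorldModel(state_dim=8, action_dim=2)
s, a, s_next = torch.randn(5, 8), torch.randn(5, 2), torch.randn(5, 8)
print(intrinsic_reward(wm, s, a, s_next).shape)  # torch.Size([5])
```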
We also explain the historical throughline to some of these ideas, inspired by Nobel-prize-winning observations in neuroscience!
www.iliadconference.com
Today we discussed the Transformer architecture & Multi-Headed Attention.
Follow along 👇 if you want to learn more about the tech that's powering today's AI, from ChatGPT to reasoning models to agents!
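If you'd like something concrete to poke at alongside the lecture, here's a minimal multi-headed self-attention layer in PyTorch. It's an illustration of the mechanism, not the course's own code; the dimensions and names are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadAttention(nn.Module):
    """Minimal multi-headed self-attention (illustrative, not the course's code)."""
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        # One linear map each for queries, keys, values, plus an output projection.
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x):
        # x: (batch, seq_len, d_model)
        B, T, _ = x.shape
        # Project and split into heads: (batch, n_heads, seq_len, d_head).
        q = self.q_proj(x).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        k = self.k_proj(x).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        v = self.v_proj(x).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        # Scaled dot-product attention, computed independently per head.
        scores = q @ k.transpose(-2, -1) / (self.d_head ** 0.5)
        attn = F.softmax(scores, dim=-1)
        out = attn @ v                                   # (batch, n_heads, seq_len, d_head)
        # Recombine heads and project back to d_model.
        out = out.transpose(1, 2).contiguous().view(B, T, -1)
        return self.out_proj(out)

# Example: batch of 2 sequences of 5 tokens, embedding size 64, 8 heads.
x = torch.randn(2, 5, 64)
print(MultiHeadAttention(d_model=64, n_heads=8)(x).shape)  # torch.Size([2, 5, 64])
```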
We mathematically prove the answer is *yes*, and outline key properties for a "safe yet capable" agent. 🧵👇
Paper: arxiv.org/abs/2502.05934
Key AI safety takeaways:
🧠 Too many values ⇒ intractable alignment
👁 Task-space growth ⇒ oversight failure
(continued below 🧵👇)
We give the first provable framework that makes it implementable—unlike RLHF or Constitutional AI, which can fail when goals conflict.
🧵👇
A PyTorch library for biologically-inspired temporal neural nets: unrolling computation through time. Integrates with our recent Encoder-Attender-Decoder, which flexibly combines models (Transformer, SSM, RNN) since no single one fits all sequence tasks.
🧵👇
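The library's actual API isn't shown in this post, so as a rough picture of the "unrolling computation through time" idea, here is a generic PyTorch sketch that steps an arbitrary recurrent cell over a sequence one time step at a time. All names are placeholders, not the library's interface.

```python
import torch
import torch.nn as nn

def unroll_through_time(cell, inputs, h0):
    """Step a recurrent cell over a (time, batch, features) sequence, collecting hidden states.

    Placeholder illustration of temporal unrolling; not the library's actual API.
    """
    h, outputs = h0, []
    for x_t in inputs:              # iterate over the time dimension
        h = cell(x_t, h)            # one step of the temporal computation
        outputs.append(h)
    return torch.stack(outputs)     # (time, batch, hidden)

# Toy usage with a vanilla RNN cell standing in for a biologically-inspired unit.
cell = nn.RNNCell(input_size=10, hidden_size=16)
xs = torch.randn(20, 4, 10)                         # 20 time steps, batch of 4
hs = unroll_through_time(cell, xs, torch.zeros(4, 16))
print(hs.shape)                                     # torch.Size([20, 4, 16])
```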
Let me break down what’s in the bill and why it’s a full-scale land grab. 🧵