Ex-cofounder & CTO of Vicarious AI (acquired by Alphabet),
Cofounder Numenta
Triply EE (BTech IIT-Mumbai, MS&PhD Stanford). #AGIComics
blog.dileeplearning.com
blog.dileeplearning.com/p/ai-conscio...
TLDR: It was fun and the process felt 'magical' at times. If you have lots of small project ideas you want to prototype, vibe-coding is a fun way to do that as long as you are willing to settle for 'good enough'.
www.agicomics.net/c/ag-breakth...
#MLSky #AI #neuroscience
🚨 New preprint! 🚨
Excited and proud (& a little nervous 😅) to share our latest work on the importance of #theta-timescale spiking during #locomotion in #learning. If you care about how organisms learn, buckle up. 🧵👇
📄 www.biorxiv.org/content/10.1...
💻 code + data 🔗 below 🤩
#neuroskyence
Their reflections span philosophy, complexity, and the limits of scientific explanation.
www.sainsburywellcome.org/web/blog/wil...
Illustration by @gilcosta.bsky.social & @joanagcc.bsky.social
Evolution could create structures in the brain that are in correspondence with structure in the world.
The brain cannot generate information about the world de novo; it's impossible.
All the brain can do is:
1. Selectively remove info that is irrelevant.
2. Re-emit info previously absorbed via evolution or memory.
Our brain never "creates" information. Never.
🧠📈 🧪
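A minimal numeric sketch of that claim (all distributions and the lumping function below are made up for illustration): by the data processing inequality, any deterministic processing of a percept can at best preserve, and usually loses, mutual information with the world state.

```python
import numpy as np

def mutual_information(joint):
    """I(A;B) in bits from a joint probability table P(a, b)."""
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])).sum())

# World state W takes 4 values; percept X is a noisy observation of W.
p_w = np.full(4, 0.25)
p_x_given_w = np.array([      # rows: w, cols: x
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.7, 0.1, 0.1],
    [0.1, 0.1, 0.7, 0.1],
    [0.1, 0.1, 0.1, 0.7],
])
joint_wx = p_w[:, None] * p_x_given_w

# "Processing" Z = f(X): a deterministic lumping of percepts (x0,x1 -> 0; x2,x3 -> 1).
f = np.array([0, 0, 1, 1])
joint_wz = np.zeros((4, 2))
for x_val, z_val in enumerate(f):
    joint_wz[:, z_val] += joint_wx[:, x_val]

print("I(W;X) =", round(mutual_information(joint_wx), 3))  # ~0.643 bits in the raw percept
print("I(W;Z) =", round(mutual_information(joint_wz), 3))  # ~0.278 bits: processing only removed information
```

No choice of f can make I(W;Z) exceed I(W;X); the best the downstream stage can do is keep the relevant bits and drop the rest.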
...but then I was quite baffled because our CSCG work seems to have tackled many of these problems in a more general setting and it's not even mentioned!
So I asked ChatGPT... I'm impressed by the answer. 1/🧵
www.biorxiv.org/content/10.1...
1/
We introduce the idea of "importance" in terms of the extent to which a region's signals steer/contribute to brain dynamics as a function of brain state.
Work by @codejoydo.bsky.social
elifesciences.org/reviewed-pre...
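Purely as an illustration of the kind of quantity "importance" refers to, and not the paper's actual method: under an assumed linear, state-dependent dynamics model x[t+1] ≈ A_s x[t], one could score each region by how strongly its signal feeds the next state. All data and names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_time = 5, 2000
states = rng.integers(0, 2, size=n_time)        # hypothetical brain-state labels (0 or 1)
x = rng.standard_normal((n_time, n_regions))    # hypothetical per-region signals

def importance_per_state(x, states, state):
    """Fit x[t+1] ~ A @ x[t] on timepoints in the given state; score region i by the
    norm of column i of A, i.e. how strongly its signal steers the next state."""
    t = np.where(states[:-1] == state)[0]
    A, *_ = np.linalg.lstsq(x[t], x[t + 1], rcond=None)   # solves x[t] @ A ~ x[t+1]
    A = A.T                                               # now x[t+1] ~ A @ x[t]
    return np.linalg.norm(A, axis=0)                      # column norms = per-region influence

for s in (0, 1):
    print(f"state {s}:", importance_per_state(x, states, s).round(2))
```

The point of the state split is that the same region can be influential in one brain state and nearly silent in another.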
Some of the features that John rightly criticizes in AI writing are shared by the sort of committee reports and consensus papers that emerge from workshops and symposia because someone felt that there had to be a “product” associated with the meeting.
meresophistry.substack.com/p/the-mental...
Can you give me hardware for creating 100B parameter Boltzmann machines?
Building hardware to run a specific algorithm is hard enough.
Building hardware that can run a specific algorithm *and* scale up to billions of parameters is super duper hard.
It's not just a matter of money... It's a scientific problem!
As to SOTA - transformers are likely SOTA because they scale so well, which is largely because they won the "hardware lottery".
I am willing to bet other models would be SOTA if we could train them at similar scales.
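A minimal sketch of where the scaling pain comes from, assuming a fully connected binary Boltzmann machine: Gibbs sampling updates units one at a time, each step depending on the previous one, unlike the dense parallel matrix multiplies transformers are built around. Sizes and weights below are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 64                                   # ~100B parameters would need roughly 450k fully connected units
W = rng.standard_normal((n_units, n_units)) * 0.01
W = (W + W.T) / 2                              # symmetric weights
np.fill_diagonal(W, 0.0)                       # no self-connections
b = np.zeros(n_units)

def gibbs_sweep(s, W, b, rng):
    """One sweep of sequential Gibbs updates over all units."""
    for i in range(len(s)):                    # inherently sequential: each update
        p_on = 1.0 / (1.0 + np.exp(-(W[i] @ s + b[i])))    # uses the latest state s
        s[i] = float(rng.random() < p_on)
    return s

s = rng.integers(0, 2, size=n_units).astype(float)
for _ in range(100):                           # many sweeps are needed to approach equilibrium
    s = gibbs_sweep(s, W, b, rng)
print("sample after 100 sweeps:", s[:10])
```

That inner loop is exactly the part that doesn't map onto hardware built for big batched matmuls, which is the "hardware lottery" point in practice.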
What other such hypotheses are there? Is there a space of such hypotheses, or is this just one?
(Also, how does this match with Konrad's @kordinglab.bsky.social argument about being hypothesis-free?)
I'll give just one example:
In the brain, attention mechanisms often operate in a top-down manner, something missing from most transformer models.
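A toy contrast, not a reference to any specific published architecture: in standard self-attention the queries come from the input itself, whereas a top-down variant could take its query from a separate goal or context signal.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
seq_len, d = 6, 8
X = rng.standard_normal((seq_len, d))               # bottom-up sensory tokens
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

# Standard self-attention: queries come from the input tokens themselves (bottom-up).
bottom_up = attention(X @ Wq, X @ Wk, X @ Wv)

# Toy top-down variant: a goal/context vector supplies the query, so what gets
# attended to is set by the goal rather than by the stimulus alone.
goal = rng.standard_normal((1, d))
top_down = attention(goal @ Wq, X @ Wk, X @ Wv)

print(bottom_up.shape, top_down.shape)              # (6, 8) (1, 8)
```

The only change is where Q comes from; the key/value pathway over the sensory tokens is untouched.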
Why aren't you convinced this is how it works? It's SOTA on all the tasks....
You are not convinced because it doesn't match some brain data?
How do you know that current AI systems are NOT how the brain works?
How do you know that? The current empirical studies strongly support the idea that the brain is a large transformer...
With commentary from several wonderful researchers!
🧠📈 #NeuroAI 🧪
www.thetransmitter.org/neuroai/acce...