Brad Aimone
@jbimaknee.bsky.social
Computational neuroscientist-in-exile; computational neuromorphic computing; putting neurons in HPC since 2011; dreaming of a day when AI will actually be brain-like.
I'm not really complaining about small models so much as small frameworks. If anything, that you can see so much from small SNNs is a testament to how immensely powerful a brain-sized SNN can be.

Whereas a lot of neuro wants to compress whole cortical columns to scalar values.
November 20, 2025 at 6:21 PM
But yeah, it all comes down to not worshiping the toy model and staying honest when evaluating it relative to scale. Many of these reduced neural concepts ("manifolds", oscillations, etc.) do not necessarily scale well, even if they "explain" our smallish data sets.
November 20, 2025 at 5:24 PM
Everyone has to start somewhere. But we have to keep in mind that the guess may be wrong, or that the relative merits may be misleading when simplified.

ANNs notoriously only became interesting at scale. They were dismissed and mocked because small ANNs couldn't beat hand-crafted features.
November 20, 2025 at 5:24 PM
The problem with 'understand the low-D first' is that we may not have the right toy models to begin with, so it is impossible to progress anywhere useful. It's a chicken-and-egg thing: until you understand a system, can you really abstract it?

I fully get what you're saying though.
November 20, 2025 at 4:55 PM
That isn't to say we shouldn't seek abstractions. But those abstractions should be grounded in models and concepts from computation and parallel computing. They shouldn't be driven by how we can visualize and analyze the limited data we can collect.
November 20, 2025 at 2:42 PM
There is zero basis to assume that the brain should be simple and interpretable. We have 100+ years of NOT understanding it to support the opposite: the brain is clearly hard, so why try to force it to be trivial?
November 20, 2025 at 2:42 PM
Most neuroscientists would agree that the brain is far more sophisticated than ANNs - so why would we force tools and interpretations that are too trivial to explain ANNs today, much less the brain? We wouldn't.
November 20, 2025 at 2:42 PM
The irony is that ANNs are proof of the opposite: they are high dimensional and cannot be easily described in a few dimensions. That's partly why people claim they're mysterious. But of course we understand how ANNs work; they just aren't simple to plot in 3 dimensions.
November 20, 2025 at 2:42 PM
This is pervasive in neuroscience now (and maybe it has been forever). Oscillations, manifolds, dimensionality reduction, mean fields, etc. all implicitly assume we can simplify our many-billion-dimensional system to something we can understand (effectively 3 or 4 dimensions).
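As a concrete illustration, here is a minimal sketch (plain NumPy; the random toy network and all the sizes are stand-ins I made up, not anything from a real study): project a layer's activations onto their principal components and see how little of the variance three dimensions actually hold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an ANN layer: random weights, ReLU, random inputs.
# Purely illustrative; a trained network has more structure, but its
# activity is still genuinely high dimensional.
n_inputs, n_hidden, n_samples = 256, 1024, 5000
W = rng.normal(size=(n_inputs, n_hidden)) / np.sqrt(n_inputs)
H = np.maximum(rng.normal(size=(n_samples, n_inputs)) @ W, 0.0)

# PCA via SVD on the mean-centered activations.
Hc = H - H.mean(axis=0)
s = np.linalg.svd(Hc, compute_uv=False)
var = s**2 / np.sum(s**2)

print(f"variance captured by top 3 PCs: {var[:3].sum():.1%}")
print(f"components for 90% of variance: {np.searchsorted(np.cumsum(var), 0.90) + 1}")
```

On a toy net like this, the top 3 components hold only a small fraction of the variance; plotting those 3 and calling it understanding throws the rest away.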
November 20, 2025 at 2:42 PM
Self-servingly, I'll plug our recent paper on a NeuroAI, cortex-inspired approach to solving real-world physics problems (with SOTA numerical accuracy), which, flipped around, suggests that the motor cortex may very well be solving real-world problems in this manner.
www.nature.com/articles/s42...
Solving sparse finite element problems on neuromorphic hardware - Nature Machine Intelligence
Theilman and Aimone introduce a natively spiking algorithm for solving partial differential equations on large-scale neuromorphic computers and demonstrate the algorithm on Intel’s Loihi 2 neuromorphi...
November 20, 2025 at 1:52 PM
The analog computing in neurons, particularly within dendrites, likely allows the brain to compress a lot of complex computation into a very small space. We absolutely need to capture that in neural computational models and hardware. But that should be to complement digital, not to bury it.
November 19, 2025 at 4:30 PM
If analog were so amazing, evolution wouldn't have invented spikes to scale up neural computation. C. elegans and similar purely analog neural systems would rule the earth.

Digital computers, like the ones everyone is reading this post on, rule the day because digital is, on average, better for computing.
November 19, 2025 at 4:30 PM
Bottom line: it is just part of the cost. We get paid to do science, most of us by our communities through grants in some form. Part of doing that science is communicating and sharing those results. That's part of the cost; it is part of the budget; it is part of the obligation.
November 18, 2025 at 6:13 PM
All sharing (data, code, etc) is a lot of work. I recall spending many days reformatting and cleaning up the ugly Matlab code from my PhD thesis into a form suitable to share when requested by another group. This was before code sharing was standard. I learned to have sharing in mind from the start.
November 18, 2025 at 6:13 PM
I guess I'm confused; I don't know any theorist that would ask for raw unpublished data, ask for extensive processing and analysis, and then just include that in a paper without including those people as authors. That obviously would be sketchy...
(and useless, as any reviewer should question that)
November 18, 2025 at 6:01 PM
Obviously one should give credit. Just as an experimentalist who bases their experimental design on prior theoretical work should cite them and give credit. No one says otherwise.
November 18, 2025 at 4:59 PM
I don't think experimentalists appreciate that a good data set can enable citations far beyond imagination. MNIST, CIFAR, etc. have hundreds of thousands of cites.

People complain about citations, but they are a currency and they are valid, especially at scale. I'm grateful for any citations I get.
November 18, 2025 at 2:06 PM
This confused me: did federal taxpayer-funded grants pay for the research? If so, then yes, it should be free.
November 18, 2025 at 1:33 PM
This isn't to knock experiments but theory has to push the experimentalists to new ways of thinking, not the other way around

This happened in mol bio 25 years ago. When the genome was sequenced, people initially still thought about one gene at a time. But there was a switch, and people now think bigger.
November 17, 2025 at 2:53 PM
I remember that thread!

This is easily apparent in meetings like SfN, where I suspect 50% of the abstracts could be from 2015 or even 2005 (we should have an LLM try to guess the years of SfN abstracts...). The rate is too slow. We have to think differently.
November 17, 2025 at 2:49 PM
This doesn't scale. It will never scale. For every "I need to see how L2/3 V1 neurons interact with L2/3 V2 neurons" answered, there are about 10 more pairwise questions that need to be answered.

We *have* to stop thinking about one question at a time.
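The back-of-the-envelope combinatorics (the area counts below are rough illustrative figures, nothing more):

```python
from math import comb

# Pairwise "how does area A interact with area B" questions grow
# quadratically with the number of areas, before layers, cell types,
# and task conditions multiply each pair further.
for n_areas in (10, 100, 360):  # 360 ~ the HCP multimodal parcellation
    print(f"{n_areas:4d} areas -> {comb(n_areas, 2):6,d} pairwise questions")
```

One experiment per pair doesn't survive contact with that curve.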
November 17, 2025 at 1:50 PM
A challenge in neuro is that data is too often collected in the context of a narrow experiment, such that the data isn't useful for the next question. It's optimized for high-impact papers, PhD theses, etc. It isn't meant to raise all ships. So theorists always have to ask for more.
November 17, 2025 at 1:50 PM
So stay tuned! Linking applications like this and neuromorphic to connectomes and functional data is the next step. And of course, solving sparse linear systems is a powerful and well-studied area of applied math, so incorporating that domain knowledge into neuroscience will be exciting! 10/10
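For readers who want the flavor of that applied math: the sketch below is not the paper's spiking algorithm, just the textbook Jacobi iteration on a 1D Poisson finite-difference system, to show how a sparse solve decomposes into purely local, neighbor-to-neighbor updates, which is the structure that makes a neural mapping plausible in the first place.

```python
import numpy as np
from scipy.sparse import diags

# 1D Poisson problem (-u'' = 1, u(0) = u(1) = 0) discretized with finite
# differences: the classic sparse tridiagonal test system A x = b.
n = 50
h = 1.0 / (n + 1)
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.full(n, h**2)

# Jacobi iteration: x_i <- (b_i - sum_{j != i} A_ij * x_j) / A_ii.
# Each unknown updates from its immediate neighbors only; that local,
# message-passing structure is what maps onto neuron-like units.
# (Not the Loihi 2 algorithm from the paper, just its textbook ancestor.)
D = A.diagonal()
R = A - diags(D)  # off-diagonal part of A
x = np.zeros(n)
for k in range(30000):
    x_new = (b - R @ x) / D
    if np.max(np.abs(x_new - x)) < 1e-12:
        break
    x = x_new

grid = np.linspace(h, 1 - h, n)
exact = 0.5 * grid * (1 - grid)  # analytic solution of -u'' = 1
print(f"stopped after {k} iterations; max error vs analytic: {np.max(np.abs(x - exact)):.2e}")
```

The paper's contribution is doing this kind of solve natively in spikes on Loihi 2 (see the link above); the sketch is only meant to show why locality makes that mapping natural.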
November 16, 2025 at 4:16 PM