Toviah Moldwin
@tmoldwin.bsky.social
Computational neuroscience: Plasticity, learning, connectomics.
(At high spatiotemporal resolution.)
May 17, 2025 at 8:27 AM
In the grand scheme of things, the main thing that matters is advances in microscopy and imaging methods. Almost all results in neuroscience are tentative because we can't see everything that's happening at the same time.
May 17, 2025 at 8:26 AM
I also have a stack of these, I call it 'apocalypse food'.
May 16, 2025 at 7:30 PM
You are correct about this.
May 16, 2025 at 7:10 PM
But so is every possible mapping, so the choice of a specific mapping is not contained within the data. Even the fact that the training data comes in (X, y) pairs is not sufficient to provide a mapping that generalizes in a specific way. The brain chooses a specific algorithm that generalizes well.
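A minimal sketch of the point, assuming toy stand-in "images" (hypothetical data, numpy only): a lookup table fits any labeling of the training set perfectly, even a random one, so the training pairs by themselves cannot single out the mapping that generalizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for "images": 100 training points.
X_train = rng.normal(size=(100, 8))

# ANY labeling of the training set, even a random one...
y_random = rng.integers(0, 2, size=100)

# ...is fit perfectly by a lookup-table "classifier".
table = {x.tobytes(): int(y) for x, y in zip(X_train, y_random)}

def lookup_classifier(x):
    # Perfect on the training set; arbitrary (here: guess 0) elsewhere.
    return table.get(x.tobytes(), 0)

train_acc = np.mean([lookup_classifier(x) == y
                     for x, y in zip(X_train, y_random)])
print(train_acc)  # 1.0, yet behavior on unseen inputs is not pinned down
```

Every one of the 2^100 possible labelings is fit equally well this way; picking the one that generalizes is information the learner has to supply.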
May 16, 2025 at 7:10 PM
(Consider that one can create an arbitrary mapping between a set of images and a set of two labels; the choice of a specific mapping is therefore a reduction of entropy and thus constitutes information.)
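The counting argument spelled out, assuming N images and binary labels:

```latex
|\mathcal{M}| = 2^N \quad \text{(possible labelings of $N$ images)}

H(M) = \log_2 |\mathcal{M}| = N \ \text{bits} \quad \text{(uniform prior over mappings)}

H(M \mid m^\ast) = 0 \quad\Rightarrow\quad \Delta H = N \ \text{bits}
```

Committing to one specific mapping m* removes N bits of uncertainty, which is exactly the information content of the choice.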
May 16, 2025 at 6:41 PM
The set of weights that correctly classifies images as cats or dogs contains information that is not contained either in the set of training images or in the set of labels.
May 16, 2025 at 6:38 PM
Learning can generate information about the *mapping* between the object and the category. It doesn't generate information about the object (by itself) or the category (by itself) but the mapping is not subject to the data processing inequality for the data or the category individually.
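For reference, a sketch of the data processing inequality being invoked: for any Markov chain,

```latex
X \to Y \to Z \;\Longrightarrow\; I(X; Z) \le I(X; Y)
```

It bounds what downstream processing can say about the object alone or the category alone; the learned mapping between them is not one of those individual quantities.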
May 16, 2025 at 6:36 PM
GPT is already pretty good at this. Maybe not perfect, but possibly as good as the median academic.
May 16, 2025 at 6:46 AM
What do you mean by 'generate information'? What is an example of someone making this sort of claim?
May 15, 2025 at 7:14 PM
Paying is best. Reviews should mostly be done by advanced grad students/postdocs who could use the cash.
May 13, 2025 at 7:41 PM
Why wouldn't you want your papers to be LLM-readable?
May 7, 2025 at 3:21 PM
If such a value to society exists, it should not be difficult for the PhD student to figure out how to articulate it themselves. A lack of independence of thought when it comes to this sort of thing would be much more concerning.
May 4, 2025 at 3:14 PM
Oh you were on that? Small world.
May 4, 2025 at 7:44 AM
But I do think in our efforts to engage with the previous work on this, we made this paper overly long and technical. We present the bottom-line formulation of the plasticity rule in the Calcitron paper.
May 3, 2025 at 8:20 PM
One of the reasons we wrote this paper is that calcium control is a great theory, but there were two semi-conflicting mathematical formulations of it, both of which had some inelegancies. I think we managed to clean them up and make it more 'theory'-like.

link.springer.com/article/10.1...
A generalized mathematical framework for the calcium control hypothesis describes weight-dependent synaptic plasticity - Journal of Computational Neuroscience
May 3, 2025 at 8:20 PM
I know that e.g. Yuri Rodrigues has a paper that incorporates second messengers, but at that point it's not really parsimonious anymore.
May 3, 2025 at 8:05 PM
The leading theory for plasticity is calcium control, which I've done some work on. I do think I've contributed on that front with the Calcitron and the FPLR framework, which came out in the past few months. Anything beyond calcium control gets into simulation territory.
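For readers who haven't seen it, a minimal sketch of the generic two-threshold calcium control rule (in the spirit of Shouval-style formulations, with illustrative parameter values; the exact FPLR equations are in the linked papers):

```python
def calcium_control_step(w, ca, dt=1.0, theta_d=0.4, theta_p=0.6,
                         eta_d=0.01, eta_p=0.02):
    """One Euler step of a two-threshold calcium-dependent plasticity rule.

    Illustrative parameters: calcium above theta_p drives potentiation,
    calcium between theta_d and theta_p drives depression. Soft bounds
    make the rule weight-dependent.
    """
    if ca >= theta_p:            # high calcium -> LTP
        dw = eta_p * (1.0 - w)   # soft-bounded toward w = 1
    elif ca >= theta_d:          # intermediate calcium -> LTD
        dw = -eta_d * w          # soft-bounded toward w = 0
    else:                        # low calcium -> no change
        dw = 0.0
    return w + dt * dw

# Example: a weight under a brief high-calcium episode
w = 0.5
for ca in [0.1, 0.5, 0.5, 0.8, 0.8, 0.2]:
    w = calcium_control_step(w, ca)
print(round(w, 4))
```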
May 3, 2025 at 8:05 PM
The reason it's less active now is that people kind of feel single neuron theory has been solved. The LIF/cable theory models are still pretty much accepted. Any additional work would almost necessarily add complexity, and that complexity is mostly not needed for 'theory' questions.
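The LIF model mentioned here, as a minimal runnable sketch (standard textbook parameter values, assumed for illustration):

```python
# Leaky integrate-and-fire: tau * dV/dt = -(V - V_rest) + R * I
tau, R = 20.0, 10.0                          # ms, MOhm
V_rest, V_th, V_reset = -70.0, -55.0, -75.0  # mV
dt, T, I = 0.1, 200.0, 1.8                   # ms, ms, nA

V, spikes = V_rest, []
for step in range(int(T / dt)):
    V += dt / tau * (-(V - V_rest) + R * I)  # Euler integration
    if V >= V_th:                            # threshold crossing
        spikes.append(step * dt)             # record spike time
        V = V_reset                          # reset potential
print(len(spikes), "spikes in", T, "ms")
```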
May 3, 2025 at 4:22 PM
Hebbian learning? Associative attractor networks (e.g. Hopfield)? Calcium control hypothesis? Predictive coding? Efficient coding? There are textbooks about neuro theory.
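To make one item on that list concrete, a toy Hopfield network with Hebbian storage (sizes assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 64, 3
patterns = rng.choice([-1, 1], size=(P, N))  # stored memories

# Hebbian rule: sum of outer products, no self-connections
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

# Recall: start from a corrupted memory, iterate sign updates
state = patterns[0].astype(float).copy()
state[:12] *= -1                             # corrupt 12 of 64 bits
for _ in range(10):
    state = np.sign(W @ state)
    state[state == 0] = 1.0
print(np.array_equal(state, patterns[0]))    # usually True: attractor recovered
```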
May 3, 2025 at 1:28 PM
I kind of like the size of the single neuron theory community; it's the right size. The network theory community is IMHO way too big: there are like thousands of papers about Hopfield networks, and that's probably too much.
May 3, 2025 at 1:23 PM
Not really true; there are a bunch of people doing work on e.g. single neuron biophysics, plasticity models, etc. Definitely not as big a field, but we exist.
May 3, 2025 at 1:20 PM