Ido Aizenbud
@idoai.bsky.social
Computational Neuroscience PhD Student
Exactly! 💯
August 2, 2025 at 12:03 PM
Dive into the full review here ➡️ doi.org/10.3389/fnin...

We would love to hear where you think single-neuron modeling should go next!
August 1, 2025 at 10:30 AM
Looking ahead, we reckon the next leap will come from combining high-resolution EM reconstructions of entire human neurons with voltage-imaging tools. Hybrid biophysical/AI models promise to clarify how single-cell properties scale up to network dynamics, and ultimately to the circuits underlying language, creativity, and memory.
August 1, 2025 at 10:30 AM
Human neurons are more functionally complex: their richer morphology and stronger synaptic nonlinearities give them extra computational power. Fitting deep neural networks to reproduce the input-output dynamics of human neurons consistently required greater network depth than for rodent neurons.
August 1, 2025 at 10:30 AM
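To make the depth comparison concrete, here is a minimal, hypothetical sketch of the recipe: fit networks of increasing depth to a neuron's input-output mapping and note the shallowest depth that suffices. The surrogate "neuron" and every hyperparameter below are illustrative, not values from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

# Toy surrogate for a neuron's input-output mapping (NOT real data):
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))                          # random input patterns
y = np.tanh(X @ rng.normal(size=20)) * np.sin(X[:, 0])   # nonlinear "response"

# Grow depth until held-out fit quality plateaus; the first sufficient
# depth acts as a crude complexity score for the mapping.
for depth in (1, 2, 3, 7):
    net = MLPRegressor(hidden_layer_sizes=(64,) * depth,
                       max_iter=1000, random_state=0)
    net.fit(X[:1500], y[:1500])
    print(depth, round(r2_score(y[1500:], net.predict(X[1500:])), 3))
```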
Single human neurons are wired to perform nontrivial logical computations! Thanks to extensive dendritic branching and specialized voltage-gated currents, HL2/3 PNs support ~25 independent NMDA-spike compartments (almost twice as many as rat neurons) and can implement XOR-like operations via dendritic Ca²⁺ spikes.
August 1, 2025 at 10:30 AM
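For intuition only, here is a toy sketch (not the detailed biophysical model) of how a dendritic Ca²⁺-spike-like nonlinearity, whose amplitude shrinks once synaptic drive exceeds threshold, yields XOR. The threshold, width, and firing criterion are made-up numbers.

```python
import numpy as np

def dendritic_ca_spike(drive, threshold=1.0, width=1.2):
    """Toy Ca2+-spike nonlinearity: maximal just above threshold,
    then decreasing as the drive grows further."""
    return np.where(drive >= threshold,
                    np.maximum(0.0, 1.0 - (drive - threshold) / width),
                    0.0)

def soma_fires(x1, x2, spike_threshold=0.4):
    drive = x1 + x2   # summed synaptic drive on the dendrite
    return bool(dendritic_ca_spike(drive) > spike_threshold)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, "->", soma_fires(x1, x2))
# prints False, True, True, False: an XOR-like truth table
```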
The large dendritic load of human neurons accelerates EPSP propagation down the dendrites, while the high density of dendritic h-channels enables faithful transfer of theta-band signals (which are associated with various learning and memory processes) from dendrites to soma.
August 1, 2025 at 10:30 AM
Load is all you need. The extensive membrane surface area of the dendrites “loads” the soma with additional capacitance and conductance. This load imposed on the AIS makes action potentials at the soma remarkably “kinky” – with a steeper rise of voltage, yet sensitive to rapid input fluctuations.
August 1, 2025 at 10:30 AM
We bring together decades of human tissue recordings, detailed biophysical models, and machine-learning techniques to address these questions.

Here are some key insights:
August 1, 2025 at 10:30 AM
In our new mini-review “Biophysical and computational insights from modeling human cortical pyramidal neurons” (doi.org/10.3389/fnin...) in
@frontiersin.bsky.social, with Sapir Shapira, @danielay1.bsky.social, Yoni Leibner, Huib Mansvelder, Christiaan de Kock, @mikilon.bsky.social, and Idan Segev,
Frontiers | Biophysical and computational insights from modeling human cortical pyramidal neurons
The human brain’s remarkable computational power enables parallel processing of vast information, integrating sensory inputs, memories, and emotions for rapi...
doi.org
August 1, 2025 at 10:30 AM
Thanks! What kind of complexity measurements do you refer to?
April 16, 2025 at 8:59 AM
Great stuff! Can you add me as well?
January 8, 2025 at 12:22 PM
Indeed, the human synapse has a strong effect on complexity, so even a rat morphology with a human synapse will be much more complex than the same morphology with a rat synapse. As you can see in panels L and M, the effect is more pronounced within rat morphologies (that is, smaller morphologies).
December 31, 2024 at 1:13 PM
In this case, estimating the complexity of the neurons using the entropy of the weights is interesting; it is similar to checking whether there is a simpler DNN that would give the same perfect fit.
December 31, 2024 at 1:08 PM
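As a rough illustration of that idea, here is a minimal sketch that treats the empirical distribution of a fitted DNN's weights as the object whose Shannon entropy is measured; the bin count is an arbitrary choice, and `weights` is any list of weight arrays.

```python
import numpy as np

def weight_entropy(weights, bins=64):
    """Shannon entropy (bits) of the empirical weight distribution --
    one possible proxy for how compressible the fitted DNN is."""
    w = np.concatenate([np.ravel(m) for m in weights])
    counts, _ = np.histogram(w, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Example with random stand-in weight matrices:
rng = np.random.default_rng(0)
print(weight_entropy([rng.normal(size=(128, 64)), rng.normal(size=64)]))
```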
In principle, if you take a DNN that is expressive enough, it should perfectly fit the function of all neurons, and in that case the FCI would be 0 for all of the neurons.
December 31, 2024 at 1:08 PM
2. We have not tried to use Encoder-Decoder systems, but it is certainly possible and may be relevant.
December 29, 2024 at 8:07 AM
1. We followed the architecture introduced in Beniaguev et al., 2021, and chose a depth of 3, which was enough to capture the variability in our model dataset. We also repeated some of the experiments with alternative depths of 2 and 7, and the results stayed similar (see the discussion for more details).
December 29, 2024 at 8:07 AM
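For readers curious what such a network looks like, here is a bare-bones PyTorch sketch in the spirit of that architecture: stacked causal 1-D convolutions mapping presynaptic spike trains to a somatic spike/voltage readout. The synapse count, channel width, and kernel size are placeholders, not our actual hyperparameters.

```python
import torch
import torch.nn as nn

class TCN(nn.Module):
    """Causal temporal convolutional network in the spirit of
    Beniaguev et al., 2021 (all sizes here are illustrative)."""
    def __init__(self, n_synapses=1278, channels=128, kernel=54, depth=3):
        super().__init__()
        layers, in_ch = [], n_synapses
        for _ in range(depth):
            layers += [nn.Conv1d(in_ch, channels, kernel, padding=kernel - 1),
                       nn.ReLU()]
            in_ch = channels
        self.body = nn.Sequential(*layers)
        self.head = nn.Conv1d(channels, 2, 1)  # [spike logit, voltage] per time bin

    def forward(self, x):                    # x: (batch, synapses, time)
        h = self.body(x)[..., :x.shape[-1]]  # keep first T steps so step t
        return self.head(h)                  # sees only inputs <= t (causal)

x = (torch.rand(1, 1278, 400) < 0.01).float()  # random presynaptic spike trains
spike_logit, voltage = TCN()(x).unbind(dim=1)  # each: (1, 400)
```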
Conceptually, the same approach could be used to measure the functional complexity of biological neural networks, and in fact of any model that we can feed random inputs and simulate to obtain the outputs.
December 27, 2024 at 2:54 PM
We argue this is a scalable approach to efficiently approximate the mutual information between the inputs and the outputs of a function (in this case, the function of a single neuron), therefore serving as a useful measure of functional complexity.
December 27, 2024 at 12:55 PM
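To illustrate just the quantity being approximated, here is a plug-in (histogram) estimate of I(X;Y) for a scalar toy function probed with random inputs. A histogram cannot handle the high-dimensional spike trains a neuron actually receives, which is where the scalable approach above comes in, so treat this purely as a sketch.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in (histogram) estimate of I(X; Y) in bits for 1-D samples."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Probe a black-box "neuron" with random inputs, estimate I(input; output):
rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = np.tanh(3 * x) + 0.1 * rng.normal(size=x.size)  # stand-in simulated output
print(f"I(X;Y) ≈ {mutual_information(x, y):.2f} bits")
```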
I'm not sure I understand your question. Can you repeat it?
December 27, 2024 at 12:48 PM
Size is indeed one of the important factors, but it is not the only one.
December 27, 2024 at 12:47 PM