David Beniaguev
@davidbeniaguev.bsky.social
For fun and work, I build Generative Models.

Computational Neuroscience PhD.
Was trying to understand the brain to help build AI, but it appears it's no longer necessary...

github: https://github.com/SelfishGene
Reposted by David Beniaguev
What makes human pyramidal neurons uniquely suited for complex information processing? How can human neurons’ distinct properties contribute to our advanced cognitive abilities?
August 1, 2025 at 10:30 AM
New substack post that I decided to write on a whim today

It summarizes my thoughts about what is good now and what soon could be even better

open.substack.com/pub/davidben...
Science is a decentralized civilization-wide collaborative effort
Anyone can contribute to science; the protocol is simple: upload a document to the web, and if someone finds it useful, they will build upon it and cite it
open.substack.com
February 27, 2025 at 1:34 PM
Reposted by David Beniaguev
Now out in PLOS CB!

We propose a simple, perceptron-like neuron model, the calcitron, that has four sources of [Ca2+]... We demonstrate that by modulating the plasticity thresholds and calcium influx from each calcium source, we can reproduce a wide range of learning and plasticity protocols.
The calcitron: A simple neuron model that implements many learning rules via the calcium control hypothesis
Author summary Researchers have developed various learning rules for artificial neural networks, but it is unclear how these rules relate to the brain’s natural processes. This study focuses on the ca...
journals.plos.org
February 19, 2025 at 4:46 PM
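
For intuition, the calcium control idea the post refers to is easy to state in code. The sketch below is my own illustration, not the paper's implementation: each synapse accumulates calcium from several sources, and two thresholds decide between depression, potentiation, or no change. The source names, coefficients, and threshold values are assumptions chosen for illustration.

```python
# Minimal sketch of the calcium control hypothesis behind a calcitron-style
# neuron: per-synapse calcium from several sources, two plasticity thresholds.
# All constants and source names below are illustrative assumptions.
import numpy as np

def calcitron_step(w, x, spiked, theta_d=1.0, theta_p=2.0,
                   eta=0.01, c_local=1.5, c_global=1.0):
    """One plasticity update for a vector of synaptic weights `w`.

    w      : (n,) current synaptic weights
    x      : (n,) binary vector of presynaptic activity
    spiked : bool, whether the postsynaptic neuron fired (drives a global signal)
    """
    # Per-synapse calcium = local influx at active synapses
    # + a global spike-driven component shared by all synapses.
    ca = c_local * x + (c_global if spiked else 0.0)

    # Calcium control hypothesis: low calcium -> no change,
    # moderate calcium -> depression (LTD), high calcium -> potentiation (LTP).
    depress = (ca >= theta_d) & (ca < theta_p)
    potentiate = ca >= theta_p
    return w - eta * depress + eta * potentiate
```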
It seems more plausible every month that PIs might soon greatly reduce the pace at which they take on new students and impose significantly higher recruiting standards

This would be a "vote with their feet" kind of way to tell whether the "grad student Turing test" has been passed in practice
I can easily see how, several years from now, it will be very hard to acquire classical PhD training in a theoretical field

Mainly due to language models maturing into research assistants that, from an advisor's point of view, are simply better than an average PhD student
February 4, 2025 at 12:03 PM
New work from our lab by @idoai.bsky.social and @danielay1.bsky.social

What gives human cortical neurons such a complex I/O function compared to rat neurons?

It turns out it's not just about their size

All details in the thread by Ido

bioRxiv: www.biorxiv.org/content/10.1...
December 26, 2024 at 5:40 PM
o3 sounds on paper like the perfect first-year PhD student

Infinitely hardworking,
infinitely knowledgeable,
exceptionally technically competent,
reads everything you send it instantly, and immediately starts working on it
December 21, 2024 at 9:58 AM
I strongly suspect that 2025-2027 will produce a huge number of "miracle years" by scientists

Scientists have often accumulated decades of insights but have always lacked grant funding and enthusiastic, capable, hardworking students to help them; that backlog will soon be unlocked
December 6, 2024 at 5:02 PM
I can easily see how, several years from now, it will be very hard to acquire classical PhD training in a theoretical field

Mainly due to language models maturing into research assistants that, from an advisor's point of view, are simply better than an average PhD student
December 1, 2024 at 8:37 PM
I remember, shortly after first learning MATLAB in my 2nd year of undergrad, feeling "wow, I can think of things to do, then nearly instantly do them and see the result"

It felt magical

Weirdly, my ideas have grown in complexity, and these days they always somehow feel at least 1-3 months of hard work away
November 28, 2024 at 5:24 PM
Wondering how hard it will be to create a self-made feed algorithm for Bluesky

Is the source code for the 'discover' or 'following' feeds available somewhere? Is it tinkerable/tunable?

How do these things work here?
November 25, 2024 at 5:03 PM
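
For what it's worth, custom feeds on Bluesky are served by external "feed generator" services that implement the AT Protocol XRPC method app.bsky.feed.getFeedSkeleton and return an ordered list of post URIs, which the app then hydrates and displays; the "Following" feed is essentially reverse-chronological posts from accounts you follow, while the ranking behind "Discover" is, as far as I know, not public. Below is a minimal sketch of such an endpoint using Flask; the hard-coded post list and trivial paging stand in for real firehose ingestion and scoring, and are assumptions for illustration only.

```python
# Minimal sketch of a Bluesky custom-feed ("feed generator") service.
# It exposes the AT Protocol XRPC method app.bsky.feed.getFeedSkeleton
# and returns an ordered page of post URIs; ranking logic is a placeholder.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In a real service these URIs would come from firehose ingestion plus your
# own scoring; here they are hard-coded stand-ins.
RANKED_POSTS = [
    "at://did:plc:example1/app.bsky.feed.post/aaa",
    "at://did:plc:example2/app.bsky.feed.post/bbb",
]

@app.route("/xrpc/app.bsky.feed.getFeedSkeleton")
def get_feed_skeleton():
    limit = int(request.args.get("limit", 50))
    cursor = int(request.args.get("cursor", 0))
    page = RANKED_POSTS[cursor:cursor + limit]
    return jsonify({
        "cursor": str(cursor + len(page)),
        "feed": [{"post": uri} for uri in page],
    })

if __name__ == "__main__":
    app.run(port=8080)
```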
My secret hope is that this site picks up steam so that Twitter will be forced to respond by reverting all the bad changes they introduced over the last ~2 years

Worst changes:
1) suppression of link sharing
2) tiktokification of news feed (engagement over substance)
November 24, 2024 at 9:34 PM
Just joined, nothing to post yet, so here is a link to my Substack

I have two posts there; it was not really possible to share them on Twitter, so maybe here?

Titles:
1) Why I can no longer pursue a career in academia (Sep 2024)
2) Obvious next steps in AI research (Sep 2023)

davidbeniaguev.substack.com
David’s Substack | David Beniaguev | Substack
My personal Substack. Click to read David’s Substack, by David Beniaguev, a Substack publication. Launched a year ago.
davidbeniaguev.substack.com
November 24, 2024 at 8:59 PM