Dominik Dold
@dodo47.bsky.social
Physics Dr. 🧙‍♂️ interested in intelligence, both artificial 🤖 and biological 🧠 Marie Curie Fellow at Uni Vienna 🥐☕️ Prev. ESA ACT & Siemens. He/him.
2. The more causal pieces the training data falls into before training, the higher the chances that the network trains successfully and reaches high performance 📈 Hence, this measure can be used to guide the initialisation of spiking neural networks.
May 2, 2025 at 8:07 AM
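One way this could look in practice (purely an illustrative sketch under our own assumptions, not code from the paper): draw several random initialisations, estimate how many causal pieces the training inputs fall into for each draw, and keep the one with the most pieces. A matching piece-counting routine is sketched further down the thread.

```python
import numpy as np

def pick_initialisation(count_pieces, n_candidates=20, shape=(3, 4), seed=0):
    """Toy init heuristic: among random weight draws, keep the one whose
    estimated causal-piece count on the training inputs is largest.
    `count_pieces` maps a weight matrix to an integer piece count
    (e.g. the `count_causal_pieces` sketch later in the thread)."""
    rng = np.random.default_rng(seed)
    candidates = [rng.uniform(0.2, 0.8, size=shape) for _ in range(n_candidates)]
    return max(candidates, key=count_pieces)
```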
We found that the number of such causal pieces has some cool properties:

1. The approximation error is lower bounded by an expression that scales with the inverse square of the number of causal pieces. More pieces, a lower error floor (which does not mean better generalisation, though)!
May 2, 2025 at 8:07 AM
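In schematic form, such a bound might read as follows (our illustrative notation, not the paper's: here f is the target function, f_theta the spiking network, P the number of causal pieces, and C a constant depending on target and architecture):

```latex
% Schematic form of the approximation bound described above (illustrative notation).
% f: target function, f_\theta: spiking network, P: number of causal pieces,
% C: constant depending on target and architecture.
\| f - f_\theta \| \;\geq\; \frac{C}{P^{2}}
```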
A causal piece is, quite literally, a piece of the input (and parameter) space where the network output is always caused by the same network components. Or simply put: the path through the network stays the same.

That's what all the differently coloured regions shown above are - one colour 🟩 = 🧩 one piece!
May 2, 2025 at 8:07 AM
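To make the definition concrete, here is a toy Python sketch (our simplification, not the paper's construction: a single layer of non-leaky integrate-and-fire neurons with instantaneous synapses, and all function names are ours). Two inputs lie in the same causal piece exactly when every neuron's output spike is caused by the same set of input spikes, so counting distinct causal signatures over sampled inputs estimates the number of pieces:

```python
import numpy as np

def causal_signature(input_times, weights, threshold=1.0):
    """Which input spikes arrive before (and hence cause) each neuron's
    output spike, for one layer of integrate-and-fire neurons with
    instantaneous synapses. A neuron that never fires gets an empty set."""
    order = np.argsort(input_times)            # process inputs in spike-time order
    signature = []
    for w in weights:                          # one weight vector per neuron
        potential, causal = 0.0, set()
        for idx in order:
            potential += w[idx]                # each input spike bumps the potential
            causal.add(idx)
            if potential >= threshold:         # output spike: later inputs are non-causal
                break
        else:
            causal = set()                     # no output spike
        signature.append(frozenset(causal))
    return tuple(signature)

def count_causal_pieces(weights, n_inputs=4, n_samples=10_000, seed=0):
    """Monte-Carlo estimate of the number of causal pieces: sample input
    spike times and count how many distinct causal signatures appear."""
    rng = np.random.default_rng(seed)
    signatures = {causal_signature(rng.uniform(0.0, 1.0, n_inputs), weights)
                  for _ in range(n_samples)}
    return len(signatures)

if __name__ == "__main__":
    weights = np.random.default_rng(1).uniform(0.2, 0.8, size=(3, 4))
    print(count_causal_pieces(weights))        # distinct pieces hit by the samples
```

The initialisation heuristic sketched near the top of the thread could use count_causal_pieces as its count_pieces argument.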
In spiking neural networks, neurons communicate - as in the brain - via short electrical pulses ⚡ (spikes). But how can we formally quantify the (dis)advantages of using spikes? 🤔

In our new preprint, @pc-pet.bsky.social and I introduce the concept of "Causal Pieces" to approach this question!
May 2, 2025 at 8:07 AM