Gabriel Béna 🌻
@solarpunkgabs.bsky.social
PhD Student at Imperial College with Dan Goodman. Pretending to be a neuro guy. Modularity, structure-function, resource-constrained ANNs/SNNs, neuromorphic + fun stuff like Neural Cellular Automata 😎 Also working w/ SpiNNCloud on SpiNNaker2.
We'll be presenting this at #GECCO2025!! Come say hi if you're around ☀️
July 3, 2025 at 2:43 PM
The REAL question on everyone's lips though...
Blog: gabrielbena.github.io/blog/2025/be...
Thread: bsky.app/profile/sola...
June 5, 2025 at 5:05 PM
Taking it even further: We're developing a graph-based "Hardware Meta-Network"!
Users define tasks as intuitive graphs (nodes = regions, edges = operations), and a GNN + coordinate-MLP generates the hardware configuration!
It's literally a compiler from human intent → NCA computation! 🤖
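Roughly how the pieces fit together (a minimal sketch, not the actual implementation; every class name and dimension here is a hypothetical stand-in):

```python
# Sketch only: a task graph (nodes = regions, edges = operations) is
# encoded by a toy GNN, and a coordinate-MLP decodes per-cell hardware
# channels. All names here are hypothetical, not the real code.
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    def __init__(self, node_dim, hidden_dim):
        super().__init__()
        self.msg = nn.Linear(node_dim, hidden_dim)
        self.upd = nn.Linear(node_dim + hidden_dim, hidden_dim)

    def forward(self, node_feats, edges):
        # node_feats: (N, node_dim); edges: list of (src, dst) index pairs
        msgs = torch.zeros(node_feats.size(0), self.msg.out_features)
        for src, dst in edges:                      # message passing over task edges
            msgs[dst] += torch.relu(self.msg(node_feats[src]))
        h = torch.relu(self.upd(torch.cat([node_feats, msgs], dim=-1)))
        return h.mean(dim=0)                        # whole-graph embedding

class CoordinateMLP(nn.Module):
    """Maps (x, y, graph_embedding) -> hardware channels for one grid cell."""
    def __init__(self, emb_dim, hw_channels, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hw_channels),
        )

    def forward(self, coords, emb):
        # coords: (H*W, 2) in [0, 1]; the graph embedding is broadcast per cell
        emb = emb.expand(coords.size(0), -1)
        return self.net(torch.cat([coords, emb], dim=-1))

# Usage: a 3-node task graph -> a 16x16 hardware map with 4 channels
node_feats = torch.randn(3, 8)                      # toy region descriptors
edges = [(0, 1), (1, 2)]                            # e.g. "distribute" then "multiply"
gnn, dec = TinyGNN(8, 32), CoordinateMLP(32, 4)
emb = gnn(node_feats, edges)
ys, xs = torch.meshgrid(torch.linspace(0, 1, 16), torch.linspace(0, 1, 16), indexing="ij")
coords = torch.stack([xs.reshape(-1), ys.reshape(-1)], dim=-1)
hardware = dec(coords, emb).reshape(16, 16, 4)      # immutable scaffold fed to the NCA
```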
June 4, 2025 at 6:37 PM
Our approach also enables task composition, meaning we can chain operations together!
Example: Distribute matrix → Multiply → Rotate → Return to original position
It's like programming, but the "execution" is continuous dynamics! We're building a neural compiler!
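A toy picture of what "chaining" means here, assuming each primitive is just a function on the grid state (illustrative stand-ins, not the NCA's learned dynamics):

```python
# Sketch only: a task is a composition of primitives applied to the
# toy "computational state". Names and shapes are illustrative.
import numpy as np

GRID = 16  # side length of the toy state

def distribute(grid, mat):
    """Write the input matrix into the top-left corner of the grid."""
    out = grid.copy()
    out[:mat.shape[0], :mat.shape[1]] = mat
    return out

def multiply(grid, w, shape):
    """Replace the stored (m, k) block by its product with w (k, n)."""
    m, k = shape
    out = np.zeros_like(grid)
    out[:m, :w.shape[1]] = grid[:m, :k] @ w
    return out

def rotate(grid):
    """Rotate the whole state by 90 degrees."""
    return np.rot90(grid)

def translate(grid, dy, dx):
    """Shift the state to a target position."""
    return np.roll(grid, (dy, dx), axis=(0, 1))

# Compose a task as a pipeline of primitives, as in the post:
mat = np.arange(6, dtype=float).reshape(2, 3)
w = np.ones((3, 2))
state = np.zeros((GRID, GRID))
state = distribute(state, mat)          # distribute matrix
state = multiply(state, w, mat.shape)   # multiply
state = rotate(state)                   # rotate
state = translate(state, 2, 2)          # move back to a target position
```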
June 4, 2025 at 6:37 PM
More on the MNIST demo: We pre-train a linear classifier, decompose the 784×10 matrix multiplication into smaller blocks, and let the NCA process them in PARALLEL!
Emulated accuracy: 60% (vs. 84% for the pre-trained classifier), not perfect due to error accumulation, but it WORKS! This is a neural network running inside a CA! 🤯
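The arithmetic behind the decomposition, as a quick sanity check (the block size here is an illustrative choice, not necessarily the one used):

```python
# Toy check: a 784x10 matmul split into blocks whose partial products
# sum to the full logits -- each block can be handled by one NCA region.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(784)           # flattened MNIST image
W = rng.standard_normal((784, 10))     # pre-trained linear classifier weights

block = 112                            # 784 = 7 * 112 (illustrative block size)
partials = [
    x[i:i + block] @ W[i:i + block]    # partial logits from one block
    for i in range(0, 784, block)
]
logits_blocked = np.sum(partials, axis=0)
logits_full = x @ W

assert np.allclose(logits_blocked, logits_full)
prediction = int(np.argmax(logits_blocked))
```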
June 4, 2025 at 6:37 PM
Through this framework, we are able to successfully train on a variety of computational primitives of matrix arithmetic.
Here is an example of the NCA performing Matrix Translation + Rotation directly in its computational state (and, by design, only using local interactions to do so)!
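To make the "local interactions only" point concrete, a tiny sketch of how a global translation can be built from repeated one-cell shifts (illustrative only, not the learned rule):

```python
# Illustration: a global translation decomposed into repeated one-cell
# shifts, i.e. something a strictly local update rule can realise.
import numpy as np

def shift_right_once(grid):
    """Each cell copies its left neighbour -- a purely local update."""
    out = np.zeros_like(grid)
    out[:, 1:] = grid[:, :-1]
    return out

grid = np.zeros((8, 8))
grid[2:4, 1:3] = 1.0                   # a small "matrix" stored in the state

for _ in range(3):                     # translate 3 cells to the right,
    grid = shift_right_once(grid)      # one local step at a time

assert grid[2:4, 4:6].sum() == 4.0     # the block arrived intact
```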
June 4, 2025 at 6:37 PM
We propose a novel framework that disentangles the concepts of “hardware” and “state” within the NCA. For us (see the sketch after this list):
- Rules = "Physics" dictating state transitions.
- Hardware = Immutable + heterogeneous scaffold guiding the CA behaviour.
- State = Dynamic physical & computational substrate.
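A minimal sketch of that separation, assuming a toy grid NCA (hypothetical names, not our code):

```python
# Sketch: the rule is shared "physics", hardware is a frozen per-cell
# scaffold, and only the state is rewritten at each step.
import numpy as np

def nca_step(state, hardware, rule):
    """One update: each cell sees its 3x3 state neighbourhood + its hardware."""
    H, W, C = state.shape
    padded = np.pad(state, ((1, 1), (1, 1), (0, 0)), mode="wrap")
    new_state = np.empty_like(state)
    for y in range(H):
        for x in range(W):
            neighbourhood = padded[y:y + 3, x:x + 3].reshape(-1)   # local perception
            cell_input = np.concatenate([neighbourhood, hardware[y, x]])
            new_state[y, x] = rule(cell_input)                     # same rule everywhere
    return new_state                                               # hardware never changes

# A toy linear rule standing in for the learned update network.
C, HW_C = 4, 2
W_rule = np.random.default_rng(0).standard_normal((9 * C + HW_C, C)) * 0.1
rule = lambda inp: np.tanh(inp @ W_rule)

state = np.random.default_rng(1).standard_normal((16, 16, C))       # dynamic substrate
hardware = np.zeros((16, 16, HW_C)); hardware[4:12, 4:12, 0] = 1.0   # immutable scaffold
state = nca_step(state, hardware, rule)
```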
- Rules = "Physics" dictating state transitions.
- Hardware = Immutable + heterogeneous scaffold guiding the CA behaviour.
- State = Dynamic physical & computational substrate.
June 4, 2025 at 6:25 PM
Another exciting part? We developed a parallel *fully identical* off-chip simulator, opening doors for hybrid training approaches. This is particularly exciting for online learning scenarios, where networks train on continuous data streams - crucial for embedded systems and autonomous agents. (6/8)
January 28, 2025 at 8:07 PM
Our Solution: We implemented EventProp on SpiNNaker2, computing exact gradients through sparse error signal communication between neurons. The key innovation? Maintaining temporal sparsity during both forward AND backward passes - a significant departure from traditional BPTT. (4/8)
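For intuition only (this is NOT the EventProp adjoint dynamics, just the access pattern described above): the backward pass visits recorded spike events rather than every timestep, as BPTT would.

```python
# Toy illustration only -- not EventProp's exact equations. It shows the
# sparsity pattern: error contributions are exchanged at spike events,
# so backward traffic stays as sparse as the spikes themselves.
import numpy as np

rng = np.random.default_rng(0)
T, n_in, n_out = 100, 20, 5
dt, tau, v_th = 1.0, 20.0, 1.0
w = rng.standard_normal((n_in, n_out)) * 0.3
in_spikes = (rng.random((T, n_in)) < 0.05).astype(float)   # sparse input raster

# Forward pass: leaky integrate-and-fire, storing only spike events.
v = np.zeros(n_out)
events = []                                    # (t, input_vector, output_spike_mask)
for t in range(T):
    v = v * np.exp(-dt / tau) + in_spikes[t] @ w
    fired = v >= v_th
    if fired.any() or in_spikes[t].any():
        events.append((t, in_spikes[t].copy(), fired.copy()))
    v[fired] = 0.0

# "Backward" pass: accumulate per-weight contributions at events only.
# A real EventProp backward integrates adjoint variables between spikes;
# the per-event errors here are placeholders to expose the access pattern.
grad_w = np.zeros_like(w)
for t, x_t, fired in reversed(events):
    err = fired.astype(float) * 0.1            # placeholder error at spike times
    grad_w += np.outer(x_t, err)               # only event timesteps are touched
```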
January 28, 2025 at 8:07 PM