Jens E. Pedersen
@jegp.bsky.social
Researching neuromorphic computing. Curious about abstractions. Cares about FOSS.
Author of Neuromorphic Intermediate Representation in NatComm: https://www.nature.com/articles/s41467-024-52259-9
My humble hope: this could be a turning point for SNNs to excel in what they were designed for: sparse, spatio-temporal signal processing.

The best part? Everything is open-source. Steal it, modify it, send it to hardware with the Neuromorphic Intermediate Representation - just please cite us :-)
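For anyone who wants to try: here is a minimal sketch of building and exporting a toy network with the nir Python package. The values are purely illustrative, and the exact field names may shift between versions, so check the current docs.

```python
import numpy as np
import nir

# Toy 2 -> 2 network: an affine projection feeding LIF neurons
# (all parameter values are illustrative, not tuned for anything)
weights = np.array([[1.0, 2.0], [3.0, 4.0]])
bias = np.array([0.0, 0.0])

graph = nir.NIRGraph.from_list(
    nir.Affine(weight=weights, bias=bias),
    nir.LIF(
        tau=np.array([0.02, 0.02]),        # membrane time constants
        r=np.array([1.0, 1.0]),            # resistances
        v_leak=np.array([0.0, 0.0]),       # leak potentials
        v_threshold=np.array([1.0, 1.0]),  # firing thresholds
    ),
)

# Serialize to a .nir file that simulator and hardware backends can import
nir.write("network.nir", graph)
```

From there, any supported backend can load the same file with nir.read and map it onto its own primitives.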
September 5, 2025 at 9:05 PM
Can you unpack this a bit?
Some argue that large models work well in machine learning because of the mysterious fact that gradient descent improves at scale, despite non-convexity (arxiv.org/pdf/2105.04026).
Would you agree? If so, how does this apply to simulations?
December 20, 2024 at 10:36 AM
Ah, yes, thank you. I initially read the quote to mean that physics restricts the algorithm, not that physics IS the algorithm.
For finding solutions, as you write, this distinction is important. Restrictions have to be baked in from the beginning, otherwise any “solution” will be meaningless.
December 20, 2024 at 10:25 AM
This is actually interesting. Did she believe that the role of silicon in VLSI systems is similar to the role of neural substrates in nervous systems?

If so, I would agree with Brad that I don't see the big difference. But simulations will always be a poor man's approximation
December 18, 2024 at 11:02 AM
Why stop there? If I had something to sell, I would want to hijack every neuromodulator I could get my hands on. Eternal chemical bliss 🏴‍☠️
December 16, 2024 at 2:53 PM
That's a great point! We cannot equate the hardware with the model. "NeuroAI" is indeed not a model.

I wonder whether the ambiguity would stand if we had a solid understanding of how the substrate related to the algorithm. Where does physics/hardware stop and where does computation begin?
December 16, 2024 at 1:50 PM
Oh dear, that's terrible and borderline denigrative 😬
December 16, 2024 at 1:37 PM
I'm wondering how to address this. Isn't part of the reason why some words remain less viscous that they have strong definitions? Could it be that part of the problem is that #NeuroAI is too vague? What if we need better definitions?
We could start with "intelligence"...
December 16, 2024 at 1:24 PM
I agree that languages inevitably evolve, but at the same time words have to *mean* something.
Personally, I consider "neuromorphic" to apply to concepts outside hardware. I am open to changing my mind, but there have been so many conflicting takes on this that I am, frankly, confused.
December 16, 2024 at 1:21 PM
It seems like a neat paper on DSP, but could you tell me how this relates to continuous computation?
December 8, 2024 at 7:24 PM
I think that's exactly the right mindset. It'll be hard to balance concerns when the new wave of hardware hits, but sticking to the "fast weights" bit is crucial. Nice.
My hunch still is that this requires a continuous representation, but I may be wrong 🤔 maybe we should do a survey?
November 24, 2024 at 7:54 AM
open-neuromorphic.org ☺️
November 24, 2024 at 7:50 AM
I'm still not sold on the MLIR angle. It may help with integrating existing models, but MLIR is inherently digital. Wouldn't that hinder the computational expressivity of mixed-signal hardware?
November 23, 2024 at 6:39 PM