T. Anderson Keller
@andykeller.bsky.social
Postdoctoral Fellow at Harvard Kempner Institute. Trying to bring natural structure to artificial neural representations. Prev: PhD at UvA. Intern @ Apple MLR, Work @ Intel Nervana
As a first step toward an answer, we used the Tetris-like dataset and variants of MNIST to compare the semantic segmentation ability of these wave-based models (shown below) against two relevant baselines: deep CNNs with large (full-image) receptive fields and small U-Nets.

10/14
March 10, 2025 at 3:34 PM
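To make the second baseline in the post above concrete, here is a minimal sketch of what a small U-Net segmentation baseline could look like in PyTorch. The depth, channel widths, and class count are illustrative assumptions, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn

class SmallUNet(nn.Module):
    """Hedged sketch of a small U-Net segmentation baseline (depth and
    channel widths are illustrative assumptions)."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, n_classes, 1))

    def forward(self, x):                            # x: (B, 1, H, W), H and W even
        s1 = self.enc1(x)                            # (B, 16, H, W)
        s2 = self.enc2(self.pool(s1))                # (B, 32, H/2, W/2)
        up = self.up(s2)                             # upsample back to (B, 16, H, W)
        return self.dec(torch.cat([up, s1], dim=1))  # per-pixel class logits
```

The deep-CNN baseline would differ mainly in replacing the encoder-decoder skip structure with a plain stack of convolutions deep enough that the receptive field covers the full image.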
Was this just an artifact of using Fourier transforms for the semantic readout, or of wave-biased architectures? No! The same models with LSTM dynamics and a linear readout of the hidden-state time series still learned waves when trained to semantically segment images of Tetris-like blocks!

8/14
March 10, 2025 at 3:34 PM
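For intuition on "LSTM dynamics with a linear readout of the hidden-state time series" in the post above, here is a hedged sketch: a convolutional LSTM cell rolled out over the input image, with a per-pixel linear layer applied to the hidden states concatenated across time. The cell design, hidden size, step count, and class count are assumptions for illustration, not the paper's exact model.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal convolutional LSTM cell: standard LSTM gates computed with a
    convolution so each pixel's state is coupled to its spatial neighbours."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class LSTMSegmenter(nn.Module):
    """Hedged sketch: LSTM dynamics over the image plus a linear readout of
    the per-pixel hidden-state time series (sizes are illustrative)."""
    def __init__(self, hid_ch=8, n_steps=20, n_classes=10):
        super().__init__()
        self.n_steps = n_steps
        self.cell = ConvLSTMCell(1, hid_ch)
        # Linear readout over the full time series of each pixel's hidden state.
        self.readout = nn.Linear(hid_ch * n_steps, n_classes)

    def forward(self, image):                          # image: (B, 1, H, W)
        B, _, H, W = image.shape
        h = image.new_zeros(B, self.cell.hid_ch, H, W)
        c = torch.zeros_like(h)
        states = []
        for _ in range(self.n_steps):
            h, c = self.cell(image, h, c)              # same image drives every step
            states.append(h)
        ts = torch.cat(states, dim=1)                  # (B, hid_ch * T, H, W)
        logits = self.readout(ts.permute(0, 2, 3, 1))  # per-pixel class logits
        return logits.permute(0, 3, 1, 2)              # (B, n_classes, H, W)
```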
We made the wave dynamics flexible by adding learned damping and natural-frequency encoders, allowing the hidden-state dynamics to adapt to the input stimulus. On simple polygon images, we found that the model learned to use these parameters to produce shape-specific wave dynamics:

6/14
March 10, 2025 at 3:34 PM
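A minimal sketch of how learned damping and natural-frequency encoders might modulate wave dynamics, assuming small convolutional encoders that predict per-pixel damping and frequency fields, which then enter a damped-oscillator update. The parameterization and update rule below are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveWaveRNN(nn.Module):
    """Hedged sketch: a damped wave recurrence whose damping and natural
    frequency are predicted per pixel from the input image."""

    def __init__(self, dt=0.1):
        super().__init__()
        self.dt = dt
        # Small conv encoders producing per-pixel damping and frequency maps.
        self.damping_enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Softplus())   # gamma >= 0
        self.freq_enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Softplus())   # omega >= 0
        self.register_buffer("laplacian", torch.tensor(
            [[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]]).view(1, 1, 3, 3))

    def forward(self, image, n_steps=100):
        gamma = self.damping_enc(image)   # (B, 1, H, W) damping field
        omega = self.freq_enc(image)      # (B, 1, H, W) natural-frequency field
        u = torch.zeros_like(image)       # displacement field
        v = torch.zeros_like(image)       # velocity field
        states = []
        for _ in range(n_steps):
            lap = F.conv2d(u, self.laplacian, padding=1)
            # Damped, locally tuned oscillators coupled through the Laplacian,
            # driven by the input image.
            accel = lap - (omega ** 2) * u - gamma * v + image
            v = v + self.dt * accel
            u = u + self.dt * v
            states.append(u)
        return torch.stack(states, dim=1)  # (B, T, 1, H, W) hidden-state time series
```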
Inspired by Mark Kac’s famous question, "Can one hear the shape of a drum?" we thought: Maybe a neural network can use wave dynamics to integrate spatial information and effectively "hear" visual shapes... To test this, we tried feeding images of squares to a wave-based RNN:

3/14
March 10, 2025 at 3:34 PM
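As a rough illustration of what "feeding images to a wave-based RNN" can mean, here is a minimal sketch: a hidden state shaped like the image, updated by a discretized 2D wave equation, with the image acting as a driving force. The stencil, step sizes, and driving term are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

# Discrete Laplacian stencil for 2D wave propagation.
LAPLACIAN = torch.tensor([[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]]).view(1, 1, 3, 3)

def wave_rnn_rollout(image, n_steps=100, c=0.5, dt=0.1):
    """Roll out a simple wave-equation recurrence driven by an image.

    image: (1, 1, H, W) tensor, e.g. a white square on a black background.
    Returns the hidden-state time series, shape (n_steps, 1, 1, H, W).
    """
    u = torch.zeros_like(image)       # displacement field
    v = torch.zeros_like(image)       # velocity field
    states = []
    for _ in range(n_steps):
        lap = F.conv2d(u, LAPLACIAN, padding=1)   # spatial coupling
        v = v + dt * (c ** 2 * lap + image)       # image acts as a driving force
        u = u + dt * v
        states.append(u)
    return torch.stack(states)

# Example: a 32x32 image with a centered square "drum".
img = torch.zeros(1, 1, 32, 32)
img[:, :, 12:20, 12:20] = 1.0
trajectory = wave_rnn_rollout(img)
print(trajectory.shape)  # torch.Size([100, 1, 1, 32, 32])
```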
In the physical world, almost all information is transmitted through traveling waves -- why should it be any different in your neural network?

Super excited to share recent work with the brilliant @mozesjacobs.bsky.social: "Traveling Waves Integrate Spatial Information Through Time"

1/14
March 10, 2025 at 3:34 PM