Anand Gopalakrishnan
@agopal42.bsky.social
Postdoc at Harvard with @yilundu.bsky.social and @gershbrain.bsky.social. PhD from IDSIA with Jürgen Schmidhuber. Previously: Apple MLR, Amazon AWS AI Lab.
agopal42.github.io
Reposted by Anand Gopalakrishnan
With some trepidation, I'm putting this out into the world:
gershmanlab.com/textbook.html
It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class.

My hope is that this will be a living document, continuously improved as I get feedback.
January 9, 2026 at 1:27 AM
Reposted by Anand Gopalakrishnan
Goal selection through the lens of subjective functions:
arxiv.org/abs/2512.15948
I welcome any feedback on these preliminary ideas.
Subjective functions
Where do objective functions come from? How do we select what goals to pursue? Human intelligence is adept at synthesizing new objective functions on the fly. How does this work, and can we endow arti...
arxiv.org
December 19, 2025 at 3:15 AM
Paper: arxiv.org/abs/2405.17283
Code: github.com/agopal42/syncx
Joint work with Aleksandar Stanic, Jürgen Schmidhuber and Michael Mozer.
Hope to see you all at our poster at #NeurIPS2024! 10/x
Recurrent Complex-Weighted Autoencoders for Unsupervised Object Discovery
Current state-of-the-art synchrony-based models encode object bindings with complex-valued activations and compute with real-valued weights in feedforward architectures. We argue for the computational...
arxiv.org
December 4, 2024 at 6:49 PM
Phase synchronization towards objects is more robust in SynCx than in the baselines. It successfully separates similarly colored objects, a common failure mode of other synchrony models, which rely on color as a shortcut feature for grouping. 9/x
December 4, 2024 at 6:49 PM
SynCx outperforms current state-of-the-art unsupervised synchrony-based models on standard multi-object datasets while using 6-23x fewer parameters than the baseline models. 8/x
December 4, 2024 at 6:49 PM
Our model needs none of the additional inductive biases (gating mechanisms), strong supervision (depth masks), or contrastive training that current state-of-the-art synchrony models rely on to achieve phase synchronization towards objects in a fully unsupervised way. 7/x
December 4, 2024 at 6:49 PM
SynCx processes complex-valued inputs at every layer using complex-valued weights. It is trained to reconstruct the input image at every iteration from the output magnitudes. The output phases are fed back as input to the next iteration, with the input magnitudes clamped to the image. 6/x
December 4, 2024 at 6:49 PM
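A minimal sketch of that loop, assuming a NumPy-style setup; `f`, `run_syncx_like`, and the tensor shapes are placeholders for illustration, not the actual SynCx implementation:

```python
# Sketch of the iteration described above (names and shapes are assumptions,
# not the authors' code): a complex-weighted autoencoder f maps complex inputs
# to complex outputs; output magnitudes reconstruct the image, output phases
# are fed back, and input magnitudes stay clamped to the image.
import numpy as np

def run_syncx_like(f, image, n_iters=3, seed=0):
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, size=image.shape)  # random initial phases
    for _ in range(n_iters):
        z_in = image * np.exp(1j * phase)       # clamp input magnitudes to the image
        z_out = f(z_in)                         # complex weights, complex activations
        recon = np.abs(z_out)                   # magnitudes reconstruct the image
        loss = np.mean((recon - image) ** 2)    # reconstruction loss at every iteration
        phase = np.angle(z_out)                 # feed output phases back to the next step
    return recon, phase
```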
Hidden units in such a system must activate based not only on the presence of features (magnitudes) but also on their relative phases. Matrix-vector products between complex-valued weights and complex-valued activations are a natural way to implement this. 5/x
December 4, 2024 at 6:49 PM
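A toy illustration of why this works (made-up values, not from the paper): in a complex weighted sum, features whose phases agree reinforce each other, while features with opposite phases cancel.

```python
import numpy as np

w = np.array([1.0 + 0j, 1.0 + 0j])  # complex weights (zero phase for simplicity)

same_phase = np.array([np.exp(1j * 0.3), np.exp(1j * 0.3)])            # bound together
diff_phase = np.array([np.exp(1j * 0.3), np.exp(1j * (0.3 + np.pi))])  # different groups

print(abs(w @ same_phase))  # ~2.0: constructive interference
print(abs(w @ diff_phase))  # ~0.0: destructive interference
```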
This is a conceptual flaw in current synchrony models, all of which use feedforward convolutional nets, but it can be resolved with iterative computation. Starting from random phases, hidden units compute phase updates that propagate local constraints until a stable configuration is reached. 4/x
December 4, 2024 at 6:49 PM
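A toy version of that constraint propagation (purely illustrative, with an assumed coupling): two units start at random phases and repeatedly pull on each other until their phases agree.

```python
import numpy as np

rng = np.random.default_rng(0)
z = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=2))  # two units, random phases
w = np.exp(1j * 0.0)  # coupling that prefers zero phase offset (assumption)

for _ in range(10):
    msgs = np.array([w * z[1], np.conj(w) * z[0]])  # each unit hears its neighbour
    z = z + msgs
    z = z / np.abs(z)                               # keep unit magnitude

print(np.angle(z))  # the two phases have converged to a common value
```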
Green and red circles highlight junctions that belong to the same and to different objects, respectively. We cannot decide which junctions belong to which object from local features alone, since the two cases are locally indistinguishable. 3/x
December 4, 2024 at 6:49 PM
We argue for the importance of iterative computation (recurrence) and complex-valued weights to achieve phase synchronization in the activations. To build some intuition, look at the three shapes (T, H, and overlapping squares) made of horizontal and vertical bars. 2/x
December 4, 2024 at 6:49 PM
Excited to present "Recurrent Complex-Weighted Autoencoders for Unsupervised Object Discovery" at #NeurIPS2024! Poster #3707, 4:30pm on Thursday.
TL;DR: Our model, SynCx, greatly simplifies the inductive biases and training procedures of current state-of-the-art synchrony models. Thread 👇 1/x
December 4, 2024 at 6:49 PM
Env.reset()
November 17, 2024 at 10:40 PM