@hchateaulaurent.bsky.social
...where a trial counts as correct when the retrieval output is closer to the correct memory than to any other stored memory. In this setting, the Max separation function yields the best results. We also show that softmax almost always outperforms k-Max.
December 12, 2024 at 6:45 PM
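The separation functions compared in this post can be sketched in a few lines of NumPy. This is an illustrative re-implementation under names of my own choosing (`separation`, `beta`), not the paper's code:

```python
import numpy as np

np.random.seed(0)

def separation(scores, kind="softmax", k=3, beta=1.0):
    """Turn raw similarity scores into retrieval weights.

    max     : one-hot weight on the single best-matching memory
    k-max   : uniform weight over the top-k matches
    softmax : exponential weighting with inverse temperature beta
    """
    if kind == "max":
        w = np.zeros_like(scores)
        w[np.argmax(scores)] = 1.0
        return w
    if kind == "k-max":
        w = np.zeros_like(scores)
        w[np.argsort(scores)[-k:]] = 1.0 / k  # argsort ascending, take last k
        return w
    e = np.exp(beta * (scores - scores.max()))  # shift for numerical stability
    return e / e.sum()

# Toy retrieval: weights over stored patterns, then a weighted sum.
memories = np.random.randn(5, 8)                    # 5 stored patterns
cue = memories[2] + 0.1 * np.random.randn(8)        # noisy cue for pattern 2
scores = -np.linalg.norm(memories - cue, axis=1)    # negative distance = similarity
out = separation(scores, "max") @ memories          # retrieves one stored pattern
```

With Max separation the output is exactly the best-matching stored pattern, which is why it scores best under the trial-correctness criterion above.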
We empirically evaluate the associative memory performance of the DND on MNIST, CIFAR-10 and Tiny ImageNet. We show that performance can be improved by replacing Euclidean similarity with Manhattan similarity.
December 12, 2024 at 6:45 PM
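A minimal sketch of the two similarity choices in a nearest-neighbour recall step (the function and parameter names here are mine, for illustration only):

```python
import numpy as np

def retrieve(memories, cue, metric="manhattan"):
    """Return the index of the stored pattern nearest to the cue.

    euclidean : L2 distance, sqrt of the sum of squared differences
    manhattan : L1 distance, sum of absolute differences
    """
    diff = memories - cue
    if metric == "euclidean":
        d = np.sqrt((diff ** 2).sum(axis=1))
    else:  # manhattan
        d = np.abs(diff).sum(axis=1)
    return int(np.argmin(d))
```

Swapping the distance only changes how "closest" is measured; the retrieval pipeline around it stays the same, which is what makes the comparison clean.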
...evaluate the Q-value from a sensory observation. We thus derive two energy functions for the DND. The implications are twofold: DND can store any vector-based information (e.g. ImageNet images) and other (better?) UHN instances could be used to improve NEC in RL tasks.
December 12, 2024 at 6:45 PM
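For reference, the DND read that maps a key to a Q-value in NEC (Pritzel et al. 2017) is an inverse-distance kernel over stored keys, normalised into weights over stored values; a NumPy sketch:

```python
import numpy as np

def dnd_lookup(keys, values, query, delta=1e-3):
    """DND read: kernel k(h, h_i) = 1 / (||h - h_i||^2 + delta),
    normalised into weights, then a weighted sum of stored values."""
    d2 = ((keys - query) ** 2).sum(axis=1)   # squared L2 distance to each key
    k = 1.0 / (d2 + delta)                   # inverse-distance kernel
    w = k / k.sum()                          # normalise to a weight vector
    return w @ values                        # weighted sum of stored Q-values
```

Since `values` can be any vectors, not just scalar Q-values, the same read works for image patterns, which is the first implication noted above.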
The UHN framework (Millidge et al. 2022) states that Hopfield nets and many other models successively apply similarity, separation and projection operations to retrieve a memory from a cue. We show that the DND is an instance of this framework, performing similar operations to...
December 12, 2024 at 6:45 PM
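The three UHN operations can be sketched as a single retrieval function. Dot-product similarity and softmax separation are chosen here for illustration; the framework allows other choices, and the function names are mine:

```python
import numpy as np

def softmax(x, beta=1.0):
    e = np.exp(beta * (x - x.max()))  # shift for numerical stability
    return e / e.sum()

def uhn_retrieve(K, V, q, beta=1.0):
    """Universal Hopfield Network read: similarity -> separation -> projection.

    K : (N, d) stored keys, V : (N, d) stored values, q : (d,) cue.
    """
    sims = K @ q                     # similarity: score the cue against each key
    weights = softmax(sims, beta)    # separation: sharpen toward the best match
    return V.T @ weights             # projection: weighted sum of stored values
```

Different memory models fall out of different choices for the three stages; the DND post above corresponds to a particular similarity/separation pair.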
🚀🥳 Thrilled to share our #Neurips2024 paper: Relating Hopfield Networks to Episodic Control! We show that the Differentiable Neural Dictionary (DND) from Neural Episodic Control (NEC) can be thought of as a Universal Hopfield Network (UHN) (1/n)
openreview.net/forum?id=59D...
December 12, 2024 at 6:45 PM