sqIRL Lab
@sqirllab.bsky.social
We are "squIRreL", the Interpretable Representation Learning Lab based at IDLab - University of Antwerp & imec.
Research Areas: #RepresentationLearning, Model #Interpretability, #explainability, #DeepLearning
#ML #AI #XAI #mechinterp
Thanks to the #Flanders AI Research Program (FAIR) for supporting this collaboration and to everyone involved for the fruitful collaboration.

#Neuromorphic #hyperdimensional #interpretability #Explainability #VSA #xai #AI #ML #FAIR #UAntwerp #imec
November 4, 2025 at 11:46 AM
#HDC models aim to be an energy-efficient alternative to current #AI systems, and thanks to the efforts of our collaborators, their decision-making process is now more interpretable.

#Neuromorphic #hyperdimensional #interpretability #Explainability #VSA #xai #AI #ML #FAIR #UAntwerp #imec
doi.org
November 4, 2025 at 11:46 AM
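As background for the post above, here is a toy illustration of the hyperdimensional computing / #VSA operations such models build on (plain NumPy; this is a sketch of the paradigm only, not the specific model or interpretability method from the linked paper): information is encoded in high-dimensional random vectors that are combined by binding and bundling and compared with a similarity measure.

```python
# Hedged toy example of hyperdimensional computing / VSA operations
# (bipolar hypervectors, binding, bundling, similarity).
# Illustrative of the paradigm, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality

def random_hv():
    return rng.choice([-1, 1], size=D)        # random bipolar hypervector

def bind(a, b):
    return a * b                              # binding: elementwise product

def bundle(*vs):
    return np.sign(np.sum(vs, axis=0))        # bundling: elementwise majority

def similarity(a, b):
    return float(a @ b) / D                   # normalized dot product

# Encode a record {color: red, shape: round} as bound role-filler pairs.
color, shape = random_hv(), random_hv()
red, round_ = random_hv(), random_hv()
record = bundle(bind(color, red), bind(shape, round_))

# Query: unbind the "color" role and compare against candidate fillers.
query = bind(record, color)
print(similarity(query, red))     # high (~0.5): the stored filler
print(similarity(query, round_))  # near 0: unrelated filler
```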
Thanks to our collaborators from the #VUB, Ward Gauderis and Geraint Wiggins, as well as #sqIRL members Thomas Dooms and José Oramas, for the nice collaboration. #UAntwerp #FlandersAI #FAIR
October 17, 2025 at 8:03 AM
Benjamin Vandersmissen (25/04 evening) will show the effects that using a twin network has on learning processes and share insights on how TNA leads to superior predictive performance in a number of tasks for several architectures. #deeplearning #ML #ICLR2025 #sqIRL
openreview.net/forum?id=TEm...
Improving Neural Network Accuracy by Concurrently Training with a...
Recently within Spiking Neural Networks, a method called Twin Network Augmentation (TNA) has been introduced. This technique claims to improve the validation accuracy of a Spiking Neural Network...
openreview.net
April 24, 2025 at 8:05 AM
Thomas Dooms will show how bilinear MLPs can serve as a more transparent component that provides a better lens to study the relationships between inputs, outputs, and the weights that define the models. #mechinterp #interpretability #ML #AI #XAI #ICLR2025 #sqIRL
openreview.net/forum?id=gI0...
Bilinear MLPs enable weight-based mechanistic interpretability
A mechanistic understanding of how MLPs do computation in deep neural net- works remains elusive. Current interpretability work can extract features from hidden activations over an input dataset...
openreview.net
April 24, 2025 at 8:05 AM
Bilinear MLPs Enable Weight-based Mechanistic Interpretability
M. Pearce, T. Dooms, A. Rigg, J. Oramas, L. Sharkey

We show that bilinear layers can serve as an interpretable replacement for current activation functions, enabling weight-based interpretability.

preprint: arxiv.org/abs/2410.08417
Bilinear MLPs enable weight-based mechanistic interpretability
A mechanistic understanding of how MLPs do computation in deep neural networks remains elusive. Current interpretability work can extract features from hidden activations over an input dataset but gen...
arxiv.org
January 23, 2025 at 10:12 PM
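For readers new to bilinear layers, a minimal sketch of the idea behind the post above (assuming PyTorch; `BilinearLayer` and its parameter names are illustrative, not the paper's code): each output is the elementwise product of two linear maps, so there is no elementwise activation and every output unit is fully described by an interaction matrix built from the weights.

```python
# Hedged sketch: a bilinear layer of the form f(x) = (W1 x) * (W2 x),
# replacing the usual act(W x). Names are illustrative.
import torch
import torch.nn as nn

class BilinearLayer(nn.Module):
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.w1 = nn.Linear(d_in, d_out, bias=False)
        self.w2 = nn.Linear(d_in, d_out, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # No elementwise nonlinearity: the output is quadratic in x,
        # so the layer is fully described by its weights.
        return self.w1(x) * self.w2(x)

layer = BilinearLayer(d_in=16, d_out=4)
x = torch.randn(2, 16)
print(layer(x).shape)  # torch.Size([2, 4])

# Since f(x)_k = sum_ij W1[k,i] W2[k,j] x_i x_j, each output unit k has a
# symmetric interaction matrix that can be inspected directly from the
# weights (e.g. via eigendecomposition), without running data through it.
k = 0
B_k = 0.5 * (torch.outer(layer.w1.weight[k], layer.w2.weight[k])
             + torch.outer(layer.w2.weight[k], layer.w1.weight[k]))
eigvals, eigvecs = torch.linalg.eigh(B_k)
```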
Improving Neural Network Accuracy by Concurrently Training with a Twin Network
B. Vandersmissen, L. Deckers, J. Oramas

We show that the effectiveness of TNA lies in a better exploration of the parameter space and the learning of more robust and diverse features.

preprint: openreview.net/forum?id=TEm...
Improving Neural Network Accuracy by Concurrently Training with a...
Recently within Spiking Neural Networks, a method called Twin Network Augmentation (TNA) has been introduced. This technique claims to improve the validation accuracy of a Spiking Neural Network...
openreview.net
January 23, 2025 at 10:12 PM
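A minimal sketch of the TNA training idea from the post above, under one reading (assuming PyTorch; the MSE logit-alignment term and the loss weighting are illustrative assumptions, not necessarily the paper's exact recipe): two identically shaped networks with independent initializations are trained jointly, with an extra term pulling their logits together.

```python
# Hedged sketch of Twin Network Augmentation-style training: two copies of
# the same architecture, different initializations, trained jointly with a
# logit-alignment term. Loss choices here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def tna_step(net_a, net_b, optimizer, x, y, align_weight=1.0):
    logits_a, logits_b = net_a(x), net_b(x)
    # Each twin gets the usual supervised loss...
    task_loss = F.cross_entropy(logits_a, y) + F.cross_entropy(logits_b, y)
    # ...plus a term aligning the two sets of logits (MSE as an assumption).
    align_loss = F.mse_loss(logits_a, logits_b)
    loss = task_loss + align_weight * align_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage: two twins with identical architecture but independent initializations.
net_a = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
net_b = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.Adam(
    list(net_a.parameters()) + list(net_b.parameters()), lr=1e-3)

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
tna_step(net_a, net_b, optimizer, x, y)
# After training, a single twin is kept for inference.
```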
Reposted by sqIRL Lab
Neural networks are not black boxes. They're the opposite of black boxes: we have extensive access to their internals.

I think people have accepted this framing so innately that they've forgotten it's not true and it even warps how they do experiments.
December 9, 2024 at 7:18 PM