Research Areas: #RepresentationLearning, Model #Interpretability, #explainability, #DeepLearning
#ML #AI #XAI #mechinterp
#Neuromorphic #hyperdimensional #interpretability #Explainability #VSA #xai #AI #ML #FAIR #UAntwerp #imec
openreview.net/forum?id=TEm...
openreview.net/forum?id=gI0...
M. Pearce, T. Dooms, A. Rigg, J. Oramas, L. Sharkey
We show that bilinear layers can serve as an interpretable replacement for current activation functions, enabling weight-based interpretability.
preprint: arxiv.org/abs/2410.08417
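For intuition, here is a minimal sketch of what a bilinear layer looks like, as I read the summary: the elementwise nonlinearity is replaced by the product of two linear projections, so the layer is described entirely by its weight matrices. This is my own illustrative PyTorch code, not the paper's implementation; all names and dimensions are placeholders.

```python
import torch
import torch.nn as nn

class BilinearLayer(nn.Module):
    """Sketch of a bilinear layer: output = Proj((W x) * (V x)).

    The elementwise product of two linear maps stands in for ReLU/GELU,
    so the layer's behavior is a function of its weights alone.
    """
    def __init__(self, d_in: int, d_hidden: int, d_out: int):
        super().__init__()
        self.w = nn.Linear(d_in, d_hidden, bias=False)      # first projection
        self.v = nn.Linear(d_in, d_hidden, bias=False)      # second projection
        self.proj = nn.Linear(d_hidden, d_out, bias=False)  # output projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Quadratic in the input, linear in each weight matrix.
        return self.proj(self.w(x) * self.v(x))

x = torch.randn(4, 16)
layer = BilinearLayer(16, 64, 16)
print(layer(x).shape)  # torch.Size([4, 16])
```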
B. Vandersmissen, L. Deckers, J. Oramas
We show that the effectiveness of TNA lies in a better exploration of the parameter space and the learning of more robust and diverse features.
preprint: openreview.net/forum?id=TEm...
I think people have accepted this framing so innately that they've forgotten it's not true and it even warps how they do experiments.