Arno Solin
arnosolin.bsky.social

Associate Professor in Machine Learning, Aalto University. ELLIS Scholar.
http://arno.solin.fi


Yes. The easiest way to find it will be on the website virtual.aistats.org. We are in the process of adding material there and will add a link.

We will go public with it as soon as everything is set up with the venue.

I'm thrilled to be Program Chairing AISTATS 2026 together with Aaditya Ramdas. AISTATS has a special feel to it, and it has been described by many colleagues as their "favourite conference". We aim to preserve that spirit while introducing some fresh elements for 2026. [3/3]

Accepted papers will be presented in person in Morocco, May 2–5, 2026. The full Call for Papers is available here: virtual.aistats.org/Conferences/... [2/3]

Reposted by Gilles Louppe

📣 Please share: We invite submissions to the 29th International Conference on Artificial Intelligence and Statistics (#AISTATS 2026) and welcome paper submissions at the intersection of AI, machine learning, statistics, and related areas. [1/3]

Reposted by Arno Solin

Remember that computers use bitstrings to represent numbers? We exploit this in our recent @auai.org paper and introduce #BitVI.

#BitVI directly learns an approximation in the space of bitstring representations, thus capturing complex distributions under varying numerical-precision regimes.
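Not BitVI itself, but the observation it builds on is easy to see in code: a minimal sketch (function names are mine, purely illustrative) of reading off the IEEE 754 float32 bitstring that a continuous parameter actually occupies in memory.

```python
import struct

def float32_to_bits(x: float) -> str:
    """Return the 32-character bitstring of x as an IEEE 754 float32."""
    (packed,) = struct.unpack(">I", struct.pack(">f", x))
    return format(packed, "032b")

def bits_to_float32(bits: str) -> float:
    """Invert float32_to_bits: decode a 32-bit string back to a float."""
    (x,) = struct.unpack(">f", struct.pack(">I", int(bits, 2)))
    return x

# 1 sign bit | 8 exponent bits | 23 mantissa bits
print(float32_to_bits(1.0))  # 00111111100000000000000000000000
```

A distribution over such bitstrings is a distribution over the representable floats, which is the space BitVI works in.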

Check our #CVPR paper and project page for more results, videos, and code!
📄 arxiv.org/abs/2411.19756
🎈 aaltoml.github.io/desplat/
DeSplat: Decomposed Gaussian Splatting for Distractor-Free Rendering

Qualitative visualization of static distractor elements achieved by our model, DeSplat. [3/n]

Compared to Splatfacto, we explicitly model distractors and can therefore ignore them, improving 3DGS reconstruction quality. [2/n]

Real-world #3DGS scenes are messy—occluders, moving objects, and clutter often ruin reconstruction. This #CVPR2025 paper presents DeSplat, which separates static scene content from distractors, all without requiring external semantic models. [1/n]

I’m visiting the Isaac Newton Institute for Mathematical Sciences in Cambridge this week.

I’m giving an invited talk in the “Calibrating prediction uncertainty: statistics and machine learning perspectives” workshop on Thursday.

Our method addresses the pressing question of probabilistic modelling in quantized large-scale ML models. See the workshop paper below. [3/3]

📄 Paper: openreview.net/forum?id=Sai...
Are Your Continuous Approximations Really Continuous? Reimagining...
Efficiently performing probabilistic inference in large models is a significant challenge due to the high computational demands and continuous nature of the model parameters. At the same time, the...

We introduce BitVI, a novel approach for variational inference with discrete bitstring representations of continuous parameters. We use a deterministic probabilistic circuit structure to model the distribution over bitstrings, allowing for exact and efficient probabilistic inference. [2/3]

Have you ever considered that, in computer memory, model weights are stored as discrete values anyway? So why not do probabilistic inference directly on the discrete (quantized) parameters? @trappmartin.bsky.social is presenting our work at #AABI2025 today. [1/3]

We show that externalising reasoning as a DAG at test time leads to more accurate, efficient multi-hop retrieval – and integrates seamlessly with RAG systems like Self-RAG.
📄 Paper: openreview.net/pdf?id=gi9aq...
3/3
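The details of Plan*RAG's reasoning DAG are in the paper; as a generic illustration only (the sub-queries are invented for the example), here is how a decomposed multi-hop question can be scheduled by topological order with the Python standard library:

```python
from graphlib import TopologicalSorter

# Hypothetical multi-hop question, decomposed into sub-queries; each entry
# maps a sub-query to the sub-queries whose answers it depends on.
plan = {
    "q1: who directed the film?": set(),
    "q2: what else did that director make?": {"q1: who directed the film?"},
    "q3: which of those won an award?": {"q2: what else did that director make?"},
}

# A topological order is a valid retrieval schedule; sub-queries with no
# dependency edge between them could even be retrieved in parallel.
order = list(TopologicalSorter(plan).static_order())
print(order)
```

The point of the sketch is only that a DAG makes the retrieval dependencies explicit at test time, instead of leaving them implicit in a linear chain of thought.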

This work was born out of Prakhar's internship with Microsoft Research (with Sukruta Prakash Midigeshi, Gaurav Sinha, Arno Solin, Nagarajan Natarajan, and Amit Sharma).
2/3

Excited to share "Plan*RAG: Efficient Test-Time Planning for Retrieval Augmented Generation", presented at the #ICLR2025 "Workshop on Reasoning and Planning for LLMs" on Monday! 🚀
1/3

Our TMLR-to-ICLR poster "Exploiting Hankel-Toeplitz Structures for Fast Computation of Kernel Precision Matrices" (Frida Viset, Anton Kullberg, Frederiek Wesel, Arno Solin)
🗓️ Hall 3 + Hall 2B #416, Fri 25 Apr 10 a.m. +08 — 12:30 p.m. +08
📄 Preprint: arxiv.org/abs/2408.02346

Our #ICLR2025 poster "Equivariant Denoisers Cannot Copy Graphs: Align Your Graph Diffusion Models" (Najwa Laabid, Severi Rissanen, Markus Heinonen, Arno Solin, Vikas Garg)
🗓️ Hall 3 + Hall 2B #194, Fri 25 Apr 3 p.m. +08 — 5:30 p.m. +08
📄 Preprint: arxiv.org/abs/2405.17656

Our #ICLR2025 poster "Streamlining Prediction in Bayesian Deep Learning" (Rui Li · Marcus Klasson, Arno Solin, Martin Trapp)
🗓️ Hall 3 + Hall 2B #413, Fri 25 Apr 10 a.m. +08 — 12:30 p.m. +08
📄 Preprint: arxiv.org/abs/2411.18425

Our #ICLR2025 poster "Discrete Codebook World Models for Continuous Control" (Aidan Scannell, Mohammadreza Nakhaeinezhadfard, Kalle Kujanpää, Yi Zhao, Kevin Luck, Arno Solin, Joni Pajarinen)
🗓️ Hall 3 + Hall 2B #415, Thu 24 Apr 10 a.m. +08 — 12:30 p.m. +08
📄 Preprint: arxiv.org/abs/2503.00653

Our #ICLR2025 poster "Free Hunch: Denoiser Covariance Estimation for Diffusion Models Without Extra Costs" (Severi Rissanen, Markus Heinonen, Arno Solin)
🗓️ Hall 3 + Hall 2B #140, Thu 24 Apr 3 p.m. +08 — 5:30 p.m. +08
📄 Preprint: arxiv.org/abs/2410.11149

Exploiting Hankel-Toeplitz Structures for Fast Computation of Kernel Precision Matrices
Frida Viset · Anton Kullberg · Frederiek Wesel · Arno Solin
Hall 3 + Hall 2B #416
🗓️ Fri 25 Apr 10 a.m. +08 — 12:30 p.m. +08
📄 arxiv.org/abs/2408.02346
The Hilbert-space Gaussian Process (HGP) approach offers a hyperparameter-independent basis function approximation for speeding up Gaussian Process (GP) inference by projecting the GP onto M basis fun...
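As a related aside (not the paper's method, which targets structure in the HGP precision matrix): for any stationary kernel evaluated on an equispaced 1D grid, the Gram matrix is Toeplitz, since K_ij = k((i-j)h) depends only on i-j. A few lines of NumPy verify this for an RBF kernel:

```python
import numpy as np

# Squared-exponential (RBF) kernel; any stationary kernel k(x, x') = k(x - x')
# behaves the same way here.
def rbf(x1, x2, lengthscale=0.5):
    return np.exp(-0.5 * (x1 - x2) ** 2 / lengthscale**2)

x = np.linspace(0.0, 1.0, 6)          # equispaced inputs
K = rbf(x[:, None], x[None, :])       # 6x6 Gram matrix

# K is Toeplitz iff every diagonal is constant:
is_toeplitz = all(
    np.allclose(np.diag(K, k), np.diag(K, k)[0]) for k in range(-5, 6)
)
print(is_toeplitz)  # True
```

Structured matrices like this admit fast (sub-cubic) linear algebra, which is the general motivation for exploiting such structure in GP computations.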

Equivariant Denoisers Cannot Copy Graphs: Align Your Graph Diffusion Models
Najwa Laabid · Severi Rissanen · Markus Heinonen · Arno Solin · Vikas Garg
Hall 3 + Hall 2B #194
🗓️ Fri 25 Apr 3 p.m. +08 — 5:30 p.m. +08
📄 arxiv.org/abs/2405.17656
Alignment is Key for Applying Diffusion Models to Retrosynthesis
Retrosynthesis, the task of identifying precursors for a given molecule, can be naturally framed as a conditional graph generation task. Diffusion models are a particularly promising modelling approac...

Streamlining Prediction in Bayesian Deep Learning
Rui Li · Marcus Klasson · Arno Solin · Martin Trapp
Hall 3 + Hall 2B #413
🗓️ Fri 25 Apr 10 a.m. +08 — 12:30 p.m. +08
📄 arxiv.org/abs/2411.18425
The rising interest in Bayesian deep learning (BDL) has led to a plethora of methods for estimating the posterior distribution. However, efficient computation of inferences, such as predictions, has b...

Discrete Codebook World Models for Continuous Control
Aidan Scannell · Mohammadreza Nakhaeinezhadfard · Kalle Kujanpää · Yi Zhao · Kevin Luck · Arno Solin · Joni Pajarinen
Hall 3 + Hall 2B #415
🗓️ Thu 24 Apr 10 a.m. +08 — 12:30 p.m. +08
📄 arxiv.org/abs/2503.00653
In reinforcement learning (RL), world models serve as internal simulators, enabling agents to predict environment dynamics and future outcomes in order to make informed decisions. While previous appro...

Free Hunch: Denoiser Covariance Estimation for Diffusion Models Without Extra Costs
Severi Rissanen · Markus Heinonen · Arno Solin
Hall 3 + Hall 2B #140
🗓️ Thu 24 Apr 3 p.m. +08 — 5:30 p.m. +08
📄 arxiv.org/abs/2410.11149
The covariance for clean data given a noisy observation is an important quantity in many training-free guided generation methods for diffusion models. Current methods require heavy test-time computati...

This week, we are presenting five papers at the main conference of the Thirteenth International Conference on Learning Representations (#ICLR2025) in Singapore. You can find my research group members and collaborators at the following posters.

We accept both regular papers (following the CVPR format, published in the proceedings) and extended abstracts (short papers of at most 4 pages, not published in the proceedings). Submission deadline: March 14th, 2025.