Rob Cornish
rob-cornish.bsky.social
Research fellow @ Oxford Statistics Department

jrmcornish.github.io
You can also find an extended abstract of my longer Markov categories paper here: arxiv.org/abs/2412.09469
Neural Network Symmetrisation in Concrete Settings
Cornish (2024) recently gave a general theory of neural network symmetrisation in the abstract context of Markov categories. We give a high-level overview of these results, and their concrete implicat...
arxiv.org
March 6, 2025 at 4:38 AM
If this is of interest to you, here is a recent talk that @paolopmath.bsky.social and I gave for a class at MIT: www.youtube.com/watch?v=ozN4...
ACT4ED Special Lecture - Paolo Perrone, Rob Cornish (Oxford): Markov Categories, Symmetries, & GenAI
YouTube video by Zardini Lab
www.youtube.com
March 6, 2025 at 4:38 AM
This meta-strategy of using category theory to simplify complex reasoning appears useful much more generally, and I think category theory for machine learning is just getting started.
March 6, 2025 at 4:38 AM
Using Markov categories, this earlier paper explained all previous work on symmetrisation as instances of a single common principle (sec 5 of arxiv.org/abs/2406.11814). It also extended this to methodology suited for *stochastic* models, which our ICLR paper applied to diffusions.
Stochastic Neural Network Symmetrisation in Markov Categories
We consider the problem of symmetrising a neural network along a group homomorphism: given a homomorphism $φ: H \to G$, we would like a procedure that converts $H$-equivariant neural networks to $G$-equivariant ones.
arxiv.org
March 6, 2025 at 4:38 AM
The underlying theory we use here comes from arxiv.org/abs/2406.11814, which studied the problem of symmetrisation using *Markov categories*. Markov categories allow for reasoning about probability in a conceptual, diagrammatic way, while also maintaining full mathematical rigour.
Stochastic Neural Network Symmetrisation in Markov Categories
We consider the problem of symmetrising a neural network along a group homomorphism: given a homomorphism $φ: H \to G$, we would like a procedure that converts $H$-equivariant neural networks to $G$-e...
arxiv.org
March 6, 2025 at 4:38 AM
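For concreteness, here is a minimal sketch (my own illustration, not code from the papers) of the classical deterministic special case that this theory generalises: symmetrising an arbitrary function into an equivariant one by averaging over a finite group. The group here is $C_4$ acting on $\mathbb{R}^4$ by cyclic shifts; the function `f` and all names are hypothetical.

```python
import numpy as np

# Group averaging over C_4 acting on R^4 by cyclic coordinate shifts.
# Given any f: R^4 -> R^4, the average
#     f_sym(x) = (1/|G|) * sum_g  g . f(g^{-1} . x)
# is C_4-equivariant:  f_sym(g . x) = g . f_sym(x).

def act(k, x):
    """Action of the k-th element of C_4: cyclic shift of coordinates."""
    return np.roll(x, k)

def symmetrise(f, group_order=4):
    """Return the group average of f over C_4."""
    def f_sym(x):
        return sum(act(k, f(act(-k, x))) for k in range(group_order)) / group_order
    return f_sym

# An arbitrary, non-equivariant toy function (hypothetical example)
f = lambda x: np.tanh(x * np.arange(1, 5))

f_sym = symmetrise(f)
x = np.array([0.3, -1.2, 0.7, 2.0])

# Equivariance check: f_sym(g . x) == g . f_sym(x)
assert np.allclose(f_sym(act(1, x)), act(1, f_sym(x)))
```

The papers above go much further than this: they treat symmetrisation along an arbitrary homomorphism $φ: H \to G$, and replace the deterministic average with stochastic constructions, all phrased in the language of Markov categories.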