Sam Power
@spmontecarlo.bsky.social
Lecturer in Maths & Stats at Bristol. Interested in probabilistic + numerical computation, statistical modelling + inference. (he / him).

Homepage: https://sites.google.com/view/sp-monte-carlo
Seminar: https://sites.google.com/view/monte-carlo-semina
I'm realising that I get slightly wound up by the vagueness with which the word "inherently" gets used in various mathematical contexts - "inherently sequential", "inherently nonlinear", and so on. It's often not clear exactly what is meant, in no small part because such claims are so often followed by their contradiction.
November 10, 2025 at 7:45 PM
Something which I'd like to gather my thoughts on at some stage is the family of undergraduate maths topics whose definitions I use often, but whose theorems I rarely do. It comes up occasionally with students, and it would be good to be able to articulate the point well.
November 9, 2025 at 11:01 PM
I feel like the "Universal Inference" paper is a very worthwhile read, in part because it highlights a particular way in which likelihoods are inferentially special, one that is pretty difficult to replicate with other strategies.
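From memory (so worth checking against the paper itself), the key construction is the split likelihood-ratio statistic: split the data into D_0 and D_1, fit any estimate $\hat\theta_1$ on D_1, and, writing $\mathcal{L}_0$ for the likelihood on D_0,
\[
U_n \;=\; \frac{\mathcal{L}_0(\hat\theta_1)}{\sup_{\theta \in \Theta_0} \mathcal{L}_0(\theta)},
\qquad
\mathbb{E}_{\theta^*}\!\left[\frac{\mathcal{L}_0(\hat\theta_1)}{\mathcal{L}_0(\theta^*)}\right] \;\le\; 1
\quad \text{for } \theta^* \in \Theta_0 ,
\]
so by Markov's inequality, rejecting the null when $U_n \ge 1/\alpha$ is valid at level $\alpha$ in finite samples, essentially without regularity conditions. It is the multiplicative structure of the likelihood that makes the expectation bound work.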
November 9, 2025 at 10:12 PM
Very cool (from Ehm-Gneiting-Jordan-Krüger, JRSSB 2016): for mean estimation, all consistent scoring rules can be obtained as conic combinations of 'extremal' consistent scoring rules, with an explicit structure. Similar results hold for quantiles (and perhaps other tasks as well!).
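From memory of the paper (so conventions on signs and inequalities may differ slightly), the extremal scores for the mean are indexed by a threshold $\theta$, and e.g. squared error is recovered by mixing over $\theta$ with (twice) Lebesgue measure:
\[
S_\theta(x, y) \;=\; \bigl(\mathbf{1}\{x \le \theta\} - \mathbf{1}\{y \le \theta\}\bigr)\,(y - \theta) \;\ge\; 0,
\qquad
\int_{\mathbb{R}} S_\theta(x, y)\, 2\,\mathrm{d}\theta \;=\; (x - y)^2 ,
\]
with $x$ the forecast and $y$ the realisation; general consistent scores then correspond to general nonnegative mixing measures $\mathrm{d}H(\theta)$.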
November 9, 2025 at 5:04 PM
Me and the gang
November 9, 2025 at 2:26 PM
Silly question: are there 'standard' neural networks based on matrix-matrix multiplies? i.e. instead of propagating a vector with matvecs and activations, propagating a matrix with matmats and activations?
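To make the question concrete, here is a toy sketch of the kind of layer I have in mind (purely illustrative; all names are made up):

```python
import numpy as np

def matmat_layer(X, A, B, C):
    # Propagate a matrix-valued hidden state: X -> relu(A X B + C),
    # where A mixes rows, B mixes columns, and C is a matrix-valued bias.
    return np.maximum(A @ X @ B + C, 0.0)

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 5))               # matrix-valued input
A = rng.standard_normal((8, 8)) / np.sqrt(8)  # row-mixing weights
B = rng.standard_normal((5, 5)) / np.sqrt(5)  # column-mixing weights
C = np.zeros((8, 5))
H = matmat_layer(X, A, B, C)                  # matrix-valued hidden state
```

(Of course, X -> A X B is only a special subset of general linear maps on vec(X), which is partly what makes me wonder whether this structure gets used 'as standard' anywhere.)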
November 6, 2025 at 11:00 AM
another hit: "wombling"
November 5, 2025 at 6:42 PM
Another paper round-up - many new, many not; some read-in-full, many not. All interesting! As with the last bundle, summaries will be kept brief and hopefully stoke curiosity, rather than providing answers, and the ordering doesn't reflect anything informative.
November 5, 2025 at 9:55 AM
A bit random, but I find that whenever the 'critique' is raised that the KL divergence is not a distance, it is pretty rare that there is a strong case given for why this is actually a problem.

(It's of course reasonable to mention such things as a warning, etc.; this is not my concern)
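(For reference, the content of the critique is just that
\[
\mathrm{KL}(P \,\|\, Q) \;=\; \int \log \frac{\mathrm{d}P}{\mathrm{d}Q}\, \mathrm{d}P
\]
is asymmetric in $(P, Q)$ and satisfies no triangle inequality, so it is not a metric.)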
November 4, 2025 at 7:32 PM
"tsunameters" is a banger; shout out spatial statistics
November 4, 2025 at 3:13 PM
I appreciate that this is technically a comprehensible sentence (and pretty benign in terms of how complex the concepts are), but the density of jargon did hit me with that vague feeling of "am I having a stroke".
November 4, 2025 at 10:35 AM
Well worth a read in general. Randomised Numerical Linear Algebra is a super cool field, and I have the impression that even its more basic results are not as widely known as they ought to be. Hopefully, this will start to change gradually (maybe through some well-chosen applications).
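For concreteness, one of the basic results I have in mind is the randomised range-finder / randomised SVD idea; a minimal numpy sketch of my own (not taken from the piece above):

```python
import numpy as np

def randomized_svd(A, k, p=10, seed=None):
    # Sketch the range of A with a Gaussian test matrix, orthonormalise,
    # then do exact dense linear algebra on the small projected problem
    # (in the spirit of Halko-Martinsson-Tropp).
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))   # random test matrix, p = oversampling
    Q, _ = np.linalg.qr(A @ Omega)            # orthonormal basis for the sketched range
    B = Q.T @ A                               # small (k + p) x n problem
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]

# quick check on a numerically low-rank matrix
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 300))
U, s, Vt = randomized_svd(A, k=20)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))  # should be ~ machine precision
```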
November 2, 2025 at 4:43 PM
I find it super funny to see how the terms { "old-school", "classical", etc. } get used in ML circles occasionally. It's healthiest to assume a bit of self-awareness in many of these cases, but regardless, it can be pretty striking to hear them used to describe things that are e.g. 10-15 years old.
October 31, 2025 at 8:59 AM
There is something quite clean about distilling a large range of statistical principles down to

"Well if that _were_ the case, then *this* would really be quite unlikely."
October 30, 2025 at 9:52 AM
now *that* is a talk title
October 28, 2025 at 10:56 AM
Reposted by Sam Power
The 1st OWABI talk of the Season will be given by François-Xavier Briol (University College London), who will talk about "Multilevel neural simulation-based inference".
Multilevel neural simulation-based inference
Neural simulation-based inference (SBI) is a popular set of methods for Bayesian inference when models are only available in the form of a simulator. These methods are widely used in the sciences...
arxiv.org
October 27, 2025 at 12:06 PM
Let me advertise our Online Monte Carlo seminar a bit:

This coming Tuesday, we have Giorgos Vasdekis speaking on some very interesting recent work.

Moreover, we have confirmed our speaker line-up through until December - very exciting!

See sites.google.com/view/monte-c... for further details.
October 26, 2025 at 4:35 PM
A bit of blog (again, dusting off some old notes with a cute observation):

hackmd.io/@sp-monte-ca...
"Attention as Deconvolution"
This is indeed in the works (after combing through some of my folders of notes), but in the interim, I can share a few things which I've put up directly as .pdf files on my website (sites.google.com/view/sp-mont...), rather than as blog posts per se. Notes 3-5 are 'new'.
October 26, 2025 at 4:30 PM
I have been thinking recently that it's peculiar that classical numerical analysis of ODEs rarely discussed (judging by e.g. textbooks) the role of dimensionality in the design, comparison, and analysis of methods, as compared to order, stability, and the like.
October 25, 2025 at 7:38 PM
A fun interpretation of some recent work:

Consider a data stream { X_1, X_2, ... } for which you entertain some parametric model P_θ.

Suppose also that you have your own (flexible) predictive machine Q, which you can use to reason about the distribution of X_t, on the basis of { X_s : s < t }.
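One concrete object to keep in mind here (purely illustrative on my part, and not necessarily the construction in the work being alluded to) is the sequential likelihood-ratio-type comparison
\[
R_t \;=\; \prod_{s \le t} \frac{q(X_s \mid X_{1:s-1})}{p_\theta(X_s \mid X_{1:s-1})},
\]
which tracks how much better the flexible predictor $Q$ explains the stream than a given $P_\theta$ does; if the data really came from $P_\theta$, then $(R_t)_{t \ge 1}$ would be a nonnegative martingale with unit expectation, so large values would be unlikely.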
October 25, 2025 at 1:01 PM
Some basic thoughts about why convexity can be quite natural (with a version in symbols after the list):

1) having lower bounds on functions is often useful
2) having families of lower bounds can be even more useful
3) especially when you can optimise over the whole family
4) 'linear-type' lower bounds tend to be extra friendly
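Points 2)-4) are tied together neatly by conjugate duality: for a closed convex function $f$,
\[
f(x) \;=\; \sup_{y}\,\bigl\{\langle x, y\rangle - f^*(y)\bigr\},
\qquad
f^*(y) \;=\; \sup_{x}\,\bigl\{\langle x, y\rangle - f(x)\bigr\},
\]
i.e. $f$ is exactly the upper envelope of the family of affine lower bounds $x \mapsto \langle x, y\rangle - f^*(y)$, and optimising over that family recovers $f$ itself.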
October 24, 2025 at 10:36 AM
funky
October 22, 2025 at 8:19 PM
Disrespecting my hip-hop rivals by criticising their "ancient, low-entropy flows".
October 21, 2025 at 9:37 PM
There's a very nice review up on arXiv today on the topic of rational approximation (I will add a link later). These things are always very nice to read about, though this one has the uncommon feature of really being about approximation only. Let me elaborate slightly:
October 21, 2025 at 8:35 AM