Sam Duffield
@samduffield.com
Stats, ML and open-source
New paper on arXiv! And I think it's a good'un 😄

Meet the new Lattice Random Walk (LRW) discretisation for SDEs. It’s radically different from traditional methods like Euler-Maruyama (EM): each iteration can only move by one of the discrete increments {−δₓ, 0, +δₓ}.
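In code, one way such a step can look (a minimal moment-matching sketch of my own; the paper's exact construction may differ): move ±δₓ with probabilities chosen so the first two moments agree with the SDE increment.

```python
import numpy as np

def lrw_step(x, drift, sigma, h, dx, rng):
    """One lattice random walk step: move by -dx, 0 or +dx, with
    probabilities matching the SDE increment's first two moments,
    E[dX] ~ drift(x) * h and E[dX^2] ~ sigma**2 * h."""
    p_plus = sigma**2 * h / (2 * dx**2) + drift(x) * h / (2 * dx)
    p_minus = sigma**2 * h / (2 * dx**2) - drift(x) * h / (2 * dx)
    u = rng.uniform()  # need p_plus + p_minus <= 1; dx = sigma * sqrt(h) saturates it
    if u < p_plus:
        return x + dx
    if u < p_plus + p_minus:
        return x - dx
    return x

# e.g. an Ornstein-Uhlenbeck process dX = -X dt + dW on the lattice
rng = np.random.default_rng(0)
x, h = 0.0, 1e-3
for _ in range(10_000):
    x = lrw_step(x, lambda y: -y, 1.0, h, np.sqrt(h), rng)
```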
August 29, 2025 at 3:07 PM
Reposted by Sam Duffield
In slides from a recent talk - the { virtuous / vicious } cycle of filtering, smoothing, and parameter estimation in state space models.
May 13, 2025 at 8:05 PM
Me: Hey so where’s good to eat round here?
Singapore taxi driver: Malaysia
April 27, 2025 at 4:57 AM
posteriors 𝞡 published at ICLR!

I’ll be in Singapore next week, let’s chat all things scalable Bayesian learning! 🇸🇬👋
April 18, 2025 at 4:01 PM
Reposted by Sam Duffield
A new instalment of office decor:
April 17, 2025 at 10:42 AM
So simple!

Normally we order our minibatches like
a, b, c, ...., [shuffle], new_a, new_b, new_c, ....
but instead, if we do
a, b, c, ...., [reverse], ...., c, b, a, [shuffle], new_a, new_b, ....

The RMSE of stochastic gradient descent drops from O(h) to O(h²), where h is the step size.

arxiv.org/abs/2504.04274
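A minimal sketch of the reversed ordering in a training loop (names mine, not from the paper):

```python
import numpy as np

def symmetric_epoch_order(n_batches, rng):
    """Yield minibatch indices: one shuffled pass, then the same pass reversed."""
    order = rng.permutation(n_batches)
    yield from order        # a, b, c, ....
    yield from order[::-1]  # ...., c, b, a   (then reshuffle for the next pair)

rng = np.random.default_rng(0)
for idx in symmetric_epoch_order(n_batches=4, rng=rng):
    ...  # params -= h * grad(loss(params, batches[idx]))
```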
April 10, 2025 at 8:48 AM
Reposted by Sam Duffield
Sequential Monte Carlo (aka. Particle Socialism?):

"why send one explorer when you can send a whole army of clueless one"
March 29, 2025 at 8:32 AM
Was revisiting the Neural ODEs paper the other day and greatly enjoying.

But I found this super confusing: it’s not an A = B + A statement
March 24, 2025 at 2:13 PM
Reposted by Sam Duffield
Thrillingly (/s), I have today (lightly) updated my website (sites.google.com/view/sp-mont...).

I highlight that I've added
i) links to several slide decks for talks about my research, and
ii) materials related to the (few) short courses which I've given in the past couple of years.

Enjoy!
Sam Power's site
sites.google.com
March 20, 2025 at 7:08 PM
Reposted by Sam Duffield
Hi there! This account will post about the AlgoPerf benchmark and leaderboard updates for faster neural network training via better training algorithms. But let's start with what AlgoPerf is, what we have done so far, and how you can train neural nets ~30% faster.
March 14, 2025 at 8:57 PM
I've been using Cursor and enjoying it but I'm not sure I'm bullish on it for the long run.

One of the best parts about VSCode is the ecosystem of extensions (and that it is open source).

Cursor is already out of sync with upstream VSCode and having extension-compatibility issues:
Unexpected indent when using "Run selection in terminal"
forum.cursor.com
February 15, 2025 at 3:42 PM
Thermo Matrix Exponentials has been published in Physical Review Research 🔥

On a thermodynamic computer, the matrix exponential arises very naturally as the temporal covariance of the noise-driven dynamics - a polynomial speedup over digital computers!
Thermodynamic matrix exponentials and thermodynamic parallelism
journals.aps.org
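A toy numerical version of the idea (my sketch, assuming a symmetric positive-definite A and a simulated Ornstein-Uhlenbeck process; the paper's physical protocol differs): the lag-τ covariance of the stationary dynamics recovers exp(−Aτ).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d, dt, tau, n_steps = 3, 1e-3, 0.3, 500_000
lag, burn = int(tau / dt), 50_000

M = rng.normal(size=(d, d))
A = M @ M.T + d * np.eye(d)  # symmetric positive-definite drift matrix

# Simulate dx = -A x dt + sqrt(2) dW with Euler-Maruyama.
xs = np.empty((n_steps, d))
x = np.zeros(d)
for t in range(n_steps):
    x = x - A @ x * dt + np.sqrt(2 * dt) * rng.normal(size=d)
    xs[t] = x

# At stationarity Sigma = A^{-1} and E[x(t+tau) x(t)^T] = exp(-A tau) Sigma,
# so exp(-A tau) is estimated from the lag and equal-time covariances.
C_tau = xs[burn + lag:].T @ xs[burn:n_steps - lag] / (n_steps - burn - lag)
Sigma = xs[burn:].T @ xs[burn:] / (n_steps - burn)
print(np.round(C_tau @ np.linalg.inv(Sigma), 2))  # agrees up to Monte Carlo error
print(np.round(expm(-A * tau), 2))                # digital reference
```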
February 12, 2025 at 5:16 PM
Discovered today that LinkedIn has daily “Puzzle games” and they’re quite fun!
January 2, 2025 at 9:14 PM
Thermodynamic Linear Algebra is published!

And I, for one, am delighted to see work from Normal Computing published in Unconventional Computing 😝
Thermodynamic linear algebra - npj Unconventional Computing
www.nature.com
December 10, 2024 at 6:15 PM
This is a lovely read
A common question nowadays: Which is better, diffusion or flow matching? 🤔

Our answer: They’re two sides of the same coin. We wrote a blog post to show how diffusion models and Gaussian flow matching are equivalent. That’s great: It means you can use them interchangeably.
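The dictionary in one direction, for the usual Gaussian interpolant x_t = α_t x_0 + σ_t ε (standard notation, not taken from the blog post; sign conventions vary between papers): the probability-flow velocity is affine in the score, so a score model converts directly into a flow-matching vector field.

```python
def velocity_from_score(x, t, score, alpha, sigma, alpha_dot, sigma_dot):
    """Probability-flow velocity v_t(x) for x_t = alpha_t x_0 + sigma_t eps,
    written in terms of the score s_t(x) = grad log p_t(x):
        v_t(x) = (da/a) x + (sigma^2 da/a - sigma ds) s_t(x)."""
    a, s = alpha(t), sigma(t)
    da, ds = alpha_dot(t), sigma_dot(t)
    return (da / a) * x + (s**2 * da / a - s * ds) * score(x, t)
```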
December 3, 2024 at 7:21 AM
Reposted by Sam Duffield
Parallel scans accumulate sequences on GPUs (or other parallel hardware) at logarithmic cost in the size of the input. The canonical example is cumulative sums (a, a+b, a+b+c, ...) from an input (a, b, c, ...), but this is hardly the only use, and, e.g., Kalman filtering can be handled in parallel.
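A minimal JAX sketch for affine recurrences x_t = a_t x_{t−1} + b_t (cumulative sums are the a_t = 1 special case, and parallel Kalman filtering composes elements of the same shape):

```python
import jax
import jax.numpy as jnp

def compose(f, g):
    # Elements encode affine maps x -> a*x + b; applying f then g is
    # another affine map, so the combination is associative.
    a1, b1 = f
    a2, b2 = g
    return a2 * a1, a2 * b1 + b2

a = jnp.full(8, 0.9)  # recurrence coefficients
b = jnp.arange(8.0)   # inputs
_, x = jax.lax.associative_scan(compose, (a, b))
# x[t] == a[t] * x[t-1] + b[t] (with x[-1] = 0), in O(log n) parallel depth
```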
November 24, 2024 at 6:53 PM