Simon J.D. Prince
@simonprinceai.bsky.social
Author of "Understanding Deep Learning". http://udlbook.com
Here is the 4th instalment in my series on ODEs and SDEs in machine learning. I previously discussed closed-form solutions for ODEs, but often no closed form is known. This article considers numerical methods, which can approximate the solution of an ODE even when it cannot be solved analytically.

rbcborealis.com/research-blo...
November 14, 2025 at 7:36 PM
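As a minimal illustration of what such numerical methods do, here is a sketch of the simplest scheme, forward Euler, which steps along the slope field of dy/dt = f(t, y). This is generic textbook background, not code from the article:

```python
import numpy as np

def euler(f, y0, t0, t1, n_steps):
    """Forward Euler: approximate the solution of dy/dt = f(t, y), y(t0) = y0."""
    h = (t1 - t0) / n_steps          # step size
    t, y = t0, y0
    ts, ys = [t], [y]
    for _ in range(n_steps):
        y = y + h * f(t, y)          # first-order Taylor step along the slope field
        t = t + h
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

# dy/dt = -y with y(0) = 1 has the exact solution y(t) = exp(-t).
ts, ys = euler(lambda t, y: -y, y0=1.0, t0=0.0, t1=5.0, n_steps=500)
print(ys[-1], np.exp(-5.0))          # numerical estimate vs. exact value
```

Halving the step size roughly halves Euler's error; higher-order schemes such as Runge-Kutta improve on this rate.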
Wow. Understanding Deep Learning has now been downloaded half a million times. Thank you so much everyone! I was overjoyed when it hit 100k so this is completely mindblowing. I'm so thrilled that people are finding it useful.
June 23, 2025 at 8:52 PM
Exciting news! @travislacroix.bsky.social (who co-wrote the chapter on ethics in Understanding Deep Learning) has a new book out, "AI and Value Alignment". Recommended for anyone serious about ethics and AI. Details at:

value-alignment.github.io

Buy it here:

broadviewpress.com/product/arti...
April 2, 2025 at 8:21 PM
Here is part III of my series for @RBCBorealis on ODEs and SDEs in machine learning. This article develops methods for solving first-order ODEs in closed form; we divide ODEs into families and present an approach for solving each.

rbcborealis.com/research-blo...
February 20, 2025 at 8:39 PM
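To give a flavour of solving by family: separable ODEs are one classic family, where the variables can be moved to opposite sides and integrated. A minimal sympy sketch (my example; the article's taxonomy and methods may differ):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# dy/dt = t*y is separable: dy/y = t dt, so ln|y| = t**2/2 + const.
ode = sp.Eq(y(t).diff(t), t * y(t))
print(sp.dsolve(ode, y(t)))   # Eq(y(t), C1*exp(t**2/2))
```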
Here's the 2nd part of my series on ODEs and SDEs in ML. This article introduces ODEs and is suitable for novices:

rbcborealis.com/research-blo...

We introduce ODEs, vector ODEs, and PDEs, categorize ODEs by how their solutions are related, and describe the conditions under which an ODE has a solution.
February 18, 2025 at 9:25 PM
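For newcomers, here is a standard example of an ODE with its family of solutions, and the classical existence condition (general textbook background, not taken from the article):

```latex
% The linear ODE below has a one-parameter family of solutions,
% one member for each initial condition y(0) = C:
\frac{dy}{dt} = \lambda y
\quad\Longrightarrow\quad
y(t) = C\,e^{\lambda t}.

% Picard--Lindelof existence/uniqueness: if f(t, y) is continuous in t
% and Lipschitz in y near (t_0, y_0), then the initial value problem
\frac{dy}{dt} = f(t, y), \qquad y(t_0) = y_0
% has a unique solution on some interval around t_0.
```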
I'm starting a series of articles on ODEs and SDEs in ML for RBC Borealis. I'll describe ODEs and SDEs from first principles without assuming prior knowledge and present applications including neural ODEs and diffusion models.

Part I: rbcborealis.com/research-blo.... Follow for parts II & III.
February 5, 2025 at 7:47 PM
These blogs for RBC Borealis consider infinite-width neural networks from 4 viewpoints. We use gradient descent or a Bayesian approach, and, for each, we focus on either the weights or the output function. This leads to the Neural Tangent Kernel, Bayesian NNs, and NNGPs. Enjoy!

tinyurl.com/yfsts565
February 3, 2025 at 9:40 PM
Learning or teaching from my book (udlbook.com)? I have now added the complete bibfile (which is accurate and took ages to make) and the LaTeX for all of the equations (helpful if you are making slides).
January 23, 2025 at 9:58 PM
Boris Meinardus: How I'd learn ML in 2025 (if I could start over) www.youtube.com/watch?v=_xIw....

(me too 😄)
January 5, 2025 at 9:25 PM
Tutorial 4 of 4 on Bayesian methods in ML for RBC Borealis concerns Neural Network Gaussian Processes:

rbcborealis.com/research-blo...

Think your network might perform better if you increased the width? NNGPs are networks with INFINITE width! Includes code and links to background info on GPs.
December 12, 2024 at 3:13 PM
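The tutorial ships its own code; as a self-contained taste of the idea, here is a minimal numpy sketch of the NNGP covariance for an infinitely wide fully connected ReLU network, via the arc-cosine kernel recursion of Cho & Saul. The hyperparameters (weight variance sw2, bias variance sb2, depth) are my assumptions, not the tutorial's:

```python
import numpy as np

def nngp_relu(X, depth, sw2=2.0, sb2=0.0):
    """NNGP kernel for an infinitely wide fully connected ReLU network.

    X     -- (n, d) array of inputs
    depth -- number of hidden layers
    sw2   -- weight prior variance (scaled by 1/fan_in)
    sb2   -- bias prior variance
    Returns the (n, n) covariance of the network outputs under the prior.
    """
    n, d = X.shape
    K = sb2 + sw2 * (X @ X.T) / d              # input-layer covariance
    for _ in range(depth):
        diag = np.sqrt(np.diag(K))
        cos = np.clip(K / np.outer(diag, diag), -1.0, 1.0)
        theta = np.arccos(cos)
        # Closed-form expectation E[relu(u) relu(v)] for jointly Gaussian (u, v)
        J = (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)
        K = sb2 + sw2 * np.outer(diag, diag) * J
    return K
```

Sampling outputs from N(0, K) then shows which functions the infinitely wide network represents before training.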
Blog 3 of 4 on Bayesian methods in ML for RBC Borealis concerns Bayesian Neural Networks (i.e., Bayesian methods for NNs from a parameter-space perspective):

rbcborealis.com/research-blo...

Parts 1 and 2 (linked in the article) introduced Bayesian methods. Coming soon in part 4: NNGPs.
November 22, 2024 at 8:33 PM
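For a neural network the parameter-space posterior is intractable, so approximations are needed; the same weight-space recipe is exact for Bayesian linear regression, though, which makes a compact illustration. A minimal numpy sketch with assumed prior precision alpha and noise precision beta (my example, not the blog's code):

```python
import numpy as np

def blr_posterior(Phi, y, alpha=1.0, beta=25.0):
    """Posterior over weights for Bayesian linear regression.

    Prior:      w ~ N(0, I/alpha)
    Likelihood: y ~ N(Phi @ w, I/beta)
    Returns the posterior mean and covariance of w (Gaussian, in closed form).
    """
    d = Phi.shape[1]
    S_inv = alpha * np.eye(d) + beta * Phi.T @ Phi
    S = np.linalg.inv(S_inv)          # posterior covariance
    m = beta * S @ Phi.T @ y          # posterior mean
    return m, S

# Toy data: y = 2x + noise, with features Phi = [1, x]
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 20)
Phi = np.stack([np.ones_like(x), x], axis=1)
y = 2 * x + rng.normal(0, 0.2, 20)
m, S = blr_posterior(Phi, y)
print(m)   # posterior mean weights, close to [0, 2]
```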