Will Smith
@willsmithvision.bsky.social
Professor in Computer Vision at the University of York, vision/graphics/ML research, Boro @mfc.co.uk fan and climber
📍York, UK
🔗 https://www-users.york.ac.uk/~waps101/
Or maybe in the style of Dr Seuss. Or Shakespeare.

If so, did it work? Asking for a friend...
May 11, 2025 at 9:56 AM
The date is just an unfortunate coincidence. This is a genuine SIGBOVIK submission and the full working source code is in our arXiv repository.
April 1, 2025 at 7:24 PM
This has been a massively entertaining and challenging side project with @jadgardner.bsky.social and @willrowan.bsky.social over the past year and (subject to rigorous peer review) will be appearing at SIGBOVIK 2025.
April 1, 2025 at 12:30 PM
@overleaf.com becomes your cloud compute provider. This should level the playing field within the community as both industry and academia will have to work within the same compute limits.
April 1, 2025 at 12:30 PM
NeuRaLaTeX brings many side benefits. Your paper source code is also your method source code. No more "Code coming soon" on GitHub. If the paper is on arXiv, the paper source link *is* the method source code! No need to remember those silly git commands anymore!
April 1, 2025 at 12:30 PM
In case your arXiv submission is timing out, we've also implemented checkpointing so you can include your trained model weights as a text file with your paper source.
April 1, 2025 at 12:30 PM
In a NeuRaLaTeX paper, when you compile your PDF, a neural network is constructed, trained and evaluated with all results and figures generated dynamically. We've worked hard on efficiency and the NeuRaLaTeX paper only took 48 hours to compile.
April 1, 2025 at 12:30 PM
Our neural network library extends the autograd engine to create neurons, linear layers and MLPs. Constructing an MLP and making a forward pass is as easy as this.
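The code screenshot attached to this post isn't preserved in this export. As a hypothetical sketch of what such an interface might look like (these macro and method names are illustrative, not the actual NeuRaLaTeX API):

```latex
% Hypothetical sketch only -- names are illustrative, not the real
% NeuRaLaTeX interface. Built on pgf's object system (\pgfoonew),
% which the library's Value snippet below also uses.
% Construct a 2-layer MLP: 3 inputs -> 4 hidden units -> 1 output.
\pgfoonew\net=new MLP(3,{4,1})
% Forward pass on an input vector; the result is stored in \result.
\net.forward({0.5,-1.2,3.0},\result)
Output: \result
```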
April 1, 2025 at 12:30 PM
Just like Karpathy's micrograd, NeuRaLaTeX implements backpropagation (reverse-mode autodiff) over a dynamically built DAG. However, while micrograd uses 150 lines of Python, NeuRaLaTeX uses around 1,100 lines of pure LaTeX, making it about 700% better.
April 1, 2025 at 12:30 PM
Not only that, but LaTeX has elegant and intuitive programming syntax. For example, creating a new Value object only uses the word "expand" four times:
\expanded{
\noexpand\pgfoonew\expandafter\noexpand
\csname #2\endcsname=
new Value(\newdata,{\selfid,\otherid},*,0)
}
April 1, 2025 at 12:30 PM
Wait, what?

Well, LaTeX is itself a Turing-complete programming language. When you "compile" a LaTeX document, really you are executing a program written in LaTeX. This program need not only format your paper contents into a PDF but can also perform useful computation.
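To make the point concrete, here is a minimal standalone example of a "compilation" that is really a computation: plain TeX count registers and a `\loop` computing the 10th Fibonacci number, with no packages required.

```latex
\documentclass{article}
\begin{document}
% Compute F(10) with TeX count-register arithmetic.
\newcount\fiba \newcount\fibb \newcount\fibtmp \newcount\fibn
\fiba=0   % F(0)
\fibb=1   % F(1)
\fibn=10
\loop
  \fibtmp=\fiba
  \advance\fibtmp by \fibb   % tmp = a + b
  \fiba=\fibb
  \fibb=\fibtmp              % (a, b) <- (b, a + b)
  \advance\fibn by -1
\ifnum\fibn>1 \repeat
% Typesets: F(10) = 55
F(10) = \the\fibb
\end{document}
```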
April 1, 2025 at 12:30 PM
On a similar note: if you have two figures one above the other at the top of one column, move one figure to the top of the other column and you gain the white space between them.
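In LaTeX's two-column classes the tip amounts to using per-column `figure` floats (not `figure*`) with `[t]` placement, and reordering them in the source so the float algorithm sends each to the top of a different column. A minimal sketch:

```latex
% Two [t]-placed column-width floats: each can land at the top of
% its own column instead of stacking in one column with dead space.
\begin{figure}[t]
  \centering
  \includegraphics[width=\linewidth]{figA}
  \caption{First figure, top of one column.}
\end{figure}
\begin{figure}[t]
  \centering
  \includegraphics[width=\linewidth]{figB}
  \caption{Second figure, top of the other column.}
\end{figure}
```

Exact column assignment is up to LaTeX's float placement algorithm, so some nudging of the source position is usually needed.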
March 7, 2025 at 7:12 PM
I must confess I just immediately copy/pasted the turkey eggs question into ChatGPT to get the answer myself. Great question!
January 17, 2025 at 3:20 PM
Of course, we then put our blog post back through the podcast generator! So now it's AI podcast hosts talking about our blog post about themselves talking about our research - how meta is that?! They have some minor existential crises as they discuss that they are themselves LLMs. 4/5
December 18, 2024 at 12:20 PM
On another level, we noticed that it made fundamental mistakes. Often these were wrapped up within clever metaphors that gave a falsely confident impression that it deeply understood the material. We were left wondering what effect a deluge of these accessible but incorrect podcasts might have. 3/5
December 18, 2024 at 12:20 PM
When NotebookLM launched the podcast feature, we both put our own papers through it and chatted about what we thought of it. On one level, we were blown away by the convincing podcast style and the way it seemed to distill complex research into an accessible form. 2/5
December 18, 2024 at 12:20 PM