LM researcher: You're right to wonder about the gaps in my resume! They are more common than people think, and there are many valid reasons why someone might have them. Here are some of the most frequent reasons you might see a gap:
Egyptologist: why yes, that’s the Fourth Intermediate Period, when I labored without Ma’at…
Rare book cataloguer: [1] i, from 1-9 (8), took a break at [2] I-III8, IV10, then was Cited In and had to Bound-with 2°: πA⁶(πA1+1, πA5+1.2), A-2B6, 2C2, x4, "gg3.4"(±"gg3"), ¶-2¶6, 3¶1, 2a-2f6, 2g2, "Gg6", 2h6, 2k-3b7. But eventually, [n.d.]
All but one of the reviews so far are 5-star.
It's free for everyone!
Share it with your friends!
https://bit.ly/4ic4VK4
#CausalSky
Looks about as simple as we would expect; lots of details to uncover.
Liu et al. Visual-RFT: Visual Reinforcement Fine-Tuning
buff.ly/DbGuYve
(posted a week ago, oops)
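The core recipe, as I read the paper: sample several responses per image, score each with a cheap verifiable rule instead of a learned reward model (IoU for detection, accuracy for classification, plus a format check), and run a GRPO-style policy update. A minimal sketch of such a reward; the box format, the names, and the 0.1 format bonus are my assumptions, not the paper's code:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def detection_reward(pred_box, gt_box, parseable):
    """Verifiable reward for one sampled response: IoU with the ground
    truth, plus a small bonus for a well-formatted, parseable answer."""
    return iou(pred_box, gt_box) + (0.1 if parseable else 0.0)

print(detection_reward((10, 10, 50, 50), (12, 12, 48, 52), True))  # ~0.92
```

Rule-based rewards like this are the whole supervision signal, which is presumably why the pipeline looks so simple.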
Plus some extra notes on the custom software I built to support the workshop: simonwillison.net/2025/Mar/8/c...
This week, with the agreement of the publisher, I uploaded the published version on arXiv.
Fewer typos, more references, and additional sections, including PAC-Bayes-Bernstein.
arxiv.org/abs/2110.11216
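For orientation if you're new to the area: PAC-Bayes bounds control the true risk of a randomized predictor by its empirical risk plus a complexity term. The classical McAllester-type statement, in standard notation (my restatement, not a quote from the monograph):

```latex
% With probability at least 1 - \delta over an i.i.d. sample of size n,
% simultaneously for all posteriors \rho:
\mathbb{E}_{\theta \sim \rho}\, R(\theta)
  \;\le\; \mathbb{E}_{\theta \sim \rho}\, r_n(\theta)
  \;+\; \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \log\frac{2\sqrt{n}}{\delta}}{2n}}
% Here R is the true risk, r_n the empirical risk on the sample, and
% \pi a prior fixed before seeing the data.
```

The PAC-Bayes-Bernstein variant covered in the new section replaces the square-root term with a variance-sensitive one, tightening the bound when the loss has small variance.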
Here's a simple analogy for how so many gains can be made on mostly the same base model:
Minimax optimal kernel two-sample tests with random Fourier features.
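In case the title is opaque: a kernel two-sample test rejects H0: P = Q when an estimate of the maximum mean discrepancy (MMD) between the two samples is large, and random Fourier features make that estimate cheap to compute. A toy sketch of the RFF-approximated MMD^2 for a Gaussian kernel; the feature count, bandwidth, and names are my choices, and the calibrated threshold behind the minimax optimality in the title is in the paper, not here:

```python
import numpy as np

def rff_mmd2(x, y, num_features=100, bandwidth=1.0, seed=0):
    """Toy RFF estimate of MMD^2 between samples x, y (n x d arrays)
    for the Gaussian kernel; parameters here are illustrative."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    # The Gaussian kernel exp(-||a - b||^2 / (2 * bandwidth^2)) matches
    # random frequencies w ~ N(0, I / bandwidth^2) (Bochner's theorem).
    w = rng.normal(scale=1.0 / bandwidth, size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)

    def features(z):
        return np.sqrt(2.0 / num_features) * np.cos(z @ w + b)

    # MMD^2 is the squared distance between mean feature embeddings.
    diff = features(x).mean(axis=0) - features(y).mean(axis=0)
    return float(diff @ diff)

rng = np.random.default_rng(1)
x = rng.normal(size=(500, 2))
y = rng.normal(loc=0.5, size=(500, 2))  # shifted distribution
print(rff_mmd2(x[:250], x[250:]))       # same distribution: near 0
print(rff_mmd2(x, y))                   # different: clearly larger
```

In practice the rejection threshold is usually calibrated by permuting the pooled sample.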
We have a super line-up of speakers and a call for papers.
This is a chance for your paper to shine at #CVPR2025
⏲️ Submission deadline: 14 March
💻 Page: uncertainty-cv.github.io/2025/
www.arxiv.org/abs/2502.19254
At a high level the formulation is straightforward:
"Measuring the Earth...from a vacation photo!"
(correct link this time: youtu.be/038AkmPvltA)
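I haven't checked which method the video uses, but a classic single-photo route to Earth's radius is the horizon dip: from height h the horizon sits below eye level by an angle theta with cos(theta) = R / (R + h), so theta ≈ sqrt(2h / R) and R ≈ 2h / theta^2. A back-of-envelope sketch with invented numbers:

```python
import math

# Earth's radius from the horizon's dip angle as seen from height h.
# The numbers below are made up for illustration, not from the video.
h = 30.0                       # camera height above the sea, meters
dip_deg = 0.176                # measured dip of the horizon, degrees
theta = math.radians(dip_deg)
R = 2.0 * h / theta**2
print(f"Estimated Earth radius: {R / 1000:.0f} km")  # ~6,360 km
```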
"Measuring the Earth...from a vacation photo!"
(correct link this time: youtu.be/038AkmPvltA)