Adrian Seyboldt
@aseyboldt.bsky.social
And we also experimented with egg a bit: egglog-python.readthedocs.io/latest/expla...
June 3, 2025 at 7:55 PM
Do you somewhere have a write-up of how that works on an example? I can't think of a reason we couldn't do the same thing in pymc with pytensor? After all, we also have the model graph in a data structure and can analyze and modify it.
June 3, 2025 at 7:53 PM
Cool stuff, will have to do some reading :-) If you want to add a sampler to this, would be fun to combine it with nuts-rs.
June 3, 2025 at 12:04 PM
pytensor (and with it pymc) will apply many of those helpers automatically through graph rewrites where appropriate, even if you write naive code. It doesn't always catch everything, so it is still good to know about them, but it can help beginners a lot.
May 27, 2025 at 10:52 AM
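As an aside on the post above: one classic example of the kind of numerical helper those rewrites target is log-sum-exp. This is a minimal plain-Python sketch (not pytensor's actual rewrite machinery, just an illustration of why the helper matters):

```python
import math

def naive_logsumexp(xs):
    # The naive form fails for large inputs:
    # math.exp(1000.0) raises OverflowError in plain Python.
    return math.log(sum(math.exp(x) for x in xs))

def stable_logsumexp(xs):
    # Shift by the maximum before exponentiating, so every
    # exponent is <= 0. This is what helpers like
    # scipy.special.logsumexp do, and the form pytensor's
    # rewrites aim to produce from naive code.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

print(stable_logsumexp([1000.0, 1000.0]))  # 1000 + log(2)
```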
I found that using zero-sum constrained regression values and then taking the softmax to map them to the simplex is usually very nice to work with.
April 17, 2025 at 10:50 AM
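A minimal sketch of the zero-sum-then-softmax mapping described above, in plain Python (the function names are mine; in a pymc model you would typically use a zero-sum-constrained prior for the raw values instead of projecting by hand):

```python
import math

def zero_sum(raw):
    # One simple way to impose the zero-sum constraint:
    # subtract the mean of the raw regression values.
    mean = sum(raw) / len(raw)
    return [r - mean for r in raw]

def softmax(zs):
    # Shift by the max for numerical stability, then normalize.
    m = max(zs)
    es = [math.exp(z - m) for z in zs]
    total = sum(es)
    return [e / total for e in es]

z = zero_sum([0.3, -1.2, 2.0])
p = softmax(z)      # a point on the simplex
print(sum(p))       # ≈ 1.0
```

Since softmax is invariant under a constant shift, the zero-sum constraint doesn't change the resulting simplex point; its role is to make the parametrization identifiable.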
My first instinct about how to model this isn't to use a MvNormal, but maybe to have one scalar variable for the total volume, and then do a regression on the simplex that tells you what ratio of the total volume is in which region?
April 17, 2025 at 10:50 AM
I don't get it, what's so strange about that quoted sentence? A bit pretentious? But if you turn all nouns *and verbs* into "something", how would any sentence survive?
March 10, 2025 at 9:37 PM
You could also do `use std::ops::Neg; num.ln().neg().ln().neg()`, not sure I'd really like to read it that way unless it is in a longer postfix chain anyway...
I sometimes just write `f64::ln(num)` though. Bit verbose with the type all the time, but I don't think it's too bad.
March 4, 2025 at 7:28 PM
Funny, I would not want to go from arviz/xarray (with properly chosen dims and coords) to a dataframe. The only time I do that is if I want to make a plot with seaborn, but that's simply a `values.to_dataframe()` call away...
December 27, 2024 at 1:04 PM
I'm here too :-)
November 20, 2024 at 9:16 AM
I'd also love to be part of the list :-)
November 19, 2024 at 10:11 PM
You can do this easily in pytorch: pytorch.org/docs/stable/...
Also seems to work with onnx (github.com/pymc-devs/nu...)
But for some reason I can't find any references in the jax docs. I'm really confused by this by the way, and maybe I just misunderstand something...
November 8, 2024 at 9:21 PM
I don't think you would have to write a kernel. The main problem with NUTS on the GPU seems to be that the GPU waits while we check the turning criterion. But we could easily keep the GPU busy during that time with a different chain. And CUDA streams are a mechanism for exactly this.
November 8, 2024 at 9:18 PM
Really cool :-)
One thing that has always bugged me in jax is that I can't find a way to use multiple CUDA streams. I think at least part of the NUTS overhead goes away if different chains run in different streams, so that the GPU doesn't have to sit around idle when a different chain could run.
November 8, 2024 at 11:03 AM