Sebastian Hellmann
@sehellmann.bsky.social
Postdoc at TU Munich.
Interested in computational modelling, decision-making, and confidence.
Cat owner, Ireland lover, and brass music fan
Reposted by Sebastian Hellmann
Kurzgesagt – In a Nutshell explains perfectly why AI in this context is really problematic.
John Oliver from Last Week Tonight also did a really good video about AI slop and how many people think it's real.
AI Slop Is Destroying The Internet
YouTube video by Kurzgesagt – In a Nutshell
youtu.be
October 27, 2025 at 8:49 PM
Unfortunately, for the variance on the probability scale, the speed-up vanishes.
September 23, 2025 at 3:39 PM
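One way to see why (my reading of the post, with illustrative numbers): for X ~ Normal(mu, sigma^2) on the real scale, the variance on the probability scale is Var[Phi(X)] = E[Phi(X)^2] - E[Phi(X)]^2, and the second moment reduces to a bivariate normal CDF, which itself has to be evaluated numerically. A Python sketch:

import numpy as np
from scipy.stats import norm, multivariate_normal

mu, sigma = 1.0, 1.5                    # illustrative group-level values
m = mu / np.sqrt(1 + sigma**2)          # E[Phi(X)] = Phi(m)
rho = sigma**2 / (1 + sigma**2)         # correlation induced by the shared X

mean_p = norm.cdf(m)                    # group-level mean, probability scale
second_moment = multivariate_normal.cdf(
    [m, m], mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]]
)                                       # E[Phi(X)^2] as a bivariate normal CDF
print(second_moment - mean_p**2)        # Var[Phi(X)]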
Thanks for the kind words.
Glad to see that people already account for this in some packages. Not sure whether it helps a lot, but using the direct computations instead of numerical integration may still speed things up a bit (about 20 times on my machine).
September 23, 2025 at 3:39 PM
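For the probit case, the contrast between the direct computation and numerical integration presumably looks like the following sketch (the ~20x figure is from the post above; timings vary by machine, and mu and sigma are illustrative):

import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

mu, sigma = 1.0, 1.5

# direct closed form: E[Phi(X)] = Phi(mu / sqrt(1 + sigma^2)) for X ~ N(mu, sigma^2)
direct = norm.cdf(mu / np.sqrt(1 + sigma**2))

# numerical integration of Phi(x) against the group-level normal density
numeric, _ = quad(lambda x: norm.cdf(x) * norm.pdf(x, mu, sigma),
                  -np.inf, np.inf)
print(direct, numeric)                  # agree to numerical precision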
The good news: We provide a simple, correct computation that accounts for this variability, ensuring accurate group-level inferences. This fix is crucial for reliable conclusions in all cognitive models with constrained parameters.
Check out the details to improve your hierarchical analyses!
September 10, 2025 at 2:41 PM
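For the probit example used in the thread, the correct computation is presumably the standard identity: if X ~ Normal(mu, sigma^2) on the real scale, the group-level mean on the probability scale is E[Phi(X)] = Phi(mu / sqrt(1 + sigma^2)), not Phi(mu). A sketch checking it against simulation (numbers are illustrative):

import numpy as np
from scipy.stats import norm

mu, sigma = 1.0, 1.5
closed_form = norm.cdf(mu / np.sqrt(1 + sigma**2))   # variability-aware mean

rng = np.random.default_rng(0)
monte_carlo = norm.cdf(rng.normal(mu, sigma, size=1_000_000)).mean()
print(closed_form, monte_carlo)         # both ~0.710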
(e.g. the standard normal CDF) and normal distributions to fit the group-level distribution.
But we cannot simply apply the same transformation to the mean of the real-valued normal distribution to derive the group-level mean on the parameter scale! This ignores individual variability.
September 10, 2025 at 2:41 PM
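A quick numerical illustration of the point (a sketch; the group-level values are made up): with individual variability, transforming the group mean and averaging the transformed individual parameters give clearly different answers.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma = 1.0, 1.5                              # group-level mean and SD, real scale
theta = rng.normal(mu, sigma, size=1_000_000)     # individual parameters

naive = norm.cdf(mu)                 # transform the group mean: biased
correct = norm.cdf(theta).mean()     # mean on the parameter scale
print(naive, correct)                # ~0.841 vs. ~0.710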
If you’re estimating group-level means of constrained parameters, which are fitted with nonlinear transformations, beware that a common approach can produce biased estimates, especially with high individual variability.
For constrained parameters, we often use nonlinear transformations...
September 10, 2025 at 2:41 PM
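A minimal sketch of the setup the thread describes (all values are illustrative): individual parameters are modelled as normal on an unconstrained real scale, and a nonlinear transformation, here the standard normal CDF, maps them into the constrained range.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mu, sigma = 0.5, 1.2                              # group-level mean and SD, real scale
theta_real = rng.normal(mu, sigma, size=50)       # one parameter per participant
theta_constrained = norm.cdf(theta_real)          # constrained scale, in (0, 1)
print(theta_constrained.min(), theta_constrained.max())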