Thomas Crow
@tcrow.bsky.social
Clinical Psychologist. Integrative clinically. Lover of #rstats. Quantcurious.
This was super random and hilarious 🤣
July 16, 2025 at 8:18 PM
I suppose, but I'm not sure how it makes sense in the context of small N unless you're not concerned with identifying any single interaction. So it seems like if you get, e.g., an exciting non-cat's-cradle finding, it would be hard to trust it in a small-sample context, just like other small-N findings.
June 22, 2025 at 7:16 PM
Can you give an example of "representing high dimensional interactions with lower sample sizes"?
June 22, 2025 at 3:32 AM
Not saying it's not a common message or a tempting one, but it can't be the right one in the next election.
May 24, 2025 at 8:23 PM
"Those people are dumb, our people are smart" is an obtuse, terrible message that dispenses with empathy for the other side and further alienates many of the people we might hope to convince that Pete is a great candidate!
May 24, 2025 at 8:22 PM
As a result, many psychological scientists are also engaging in lots of wasteful analyses, publishing findings that are very interesting on the face of it but virtually guaranteed not to replicate 🫠
March 15, 2025 at 7:11 PM
Are there power issues when it comes to including both random slopes and random intercepts? And am I crazy, or is there some reason glmmTMB has been more likely than lme4 to fit random-slopes models without convergence errors?
February 26, 2025 at 12:19 AM
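(A minimal sketch related to the post above, not from the original thread: the same random-intercept + random-slope model fit in both lme4 and glmmTMB, using lme4's built-in sleepstudy data as a stand-in. The formula and data are illustrative.)

```r
# Same random-slopes model in lme4 and glmmTMB; convergence behavior can
# differ because the two packages use different optimizers/parameterizations.
library(lme4)
library(glmmTMB)

data("sleepstudy", package = "lme4")

# lme4: random intercept and slope for Days within Subject
m_lme4 <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

# glmmTMB: identical formula, gaussian family by default
m_tmb <- glmmTMB(Reaction ~ Days + (Days | Subject), data = sleepstudy)

summary(m_lme4)
summary(m_tmb)
```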
Maybe there's a typo but I'm seeing the word "tongue" here... Do tell. 🤔
December 9, 2024 at 2:17 PM
I like to think I'm operating on a slightly more complex basis than reinforcement learning 🫠
December 7, 2024 at 4:00 PM
For the naive among us: why use Fortran in your R code? I assume it's for performance? My understanding is that Fortran is super old; I didn't realize it was used in R packages the way that, e.g., C code is.
October 26, 2024 at 2:17 PM
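(An editor's aside, not from the thread: base R already leans on Fortran through BLAS/LAPACK, and packages call compiled Fortran much like compiled C, via `.Fortran()` and `useDynLib()`. The routine name below is hypothetical.)

```r
# Base R is already built on Fortran numerical libraries:
La_version()      # LAPACK version linked into this R build
extSoftVersion()  # BLAS/LAPACK and other external library versions

# Inside a package, a compiled Fortran subroutine is called much like C code.
# "mysub" is a hypothetical registered routine, shown only for illustration:
# out <- .Fortran("mysub",
#                 n = as.integer(length(x)),
#                 x = as.double(x),
#                 result = double(length(x)))
```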
Why not just grab some pseudoephedrine from behind the counter? On the other hand if you're stuffed up and have good self-control, limited frequency oxymetazoline is simply incredible.
October 20, 2024 at 8:56 PM
Incredibly cool work - great to see it in press!
October 17, 2024 at 1:45 PM
How long have the delays been an issue? I'm hearing recently that there have been big delays, which was sad to hear, given my very positive experiences in Germany many years ago.
October 12, 2024 at 7:52 PM
sjmisc and sjPlot used to be big for me, but now I've mostly transitioned to easystats (shoutout @strengejacke.bsky.social). The marginaleffects package has been great. Also lavaan, lme4, psych, and increasingly tidymodels. I'm in psychology.
October 5, 2024 at 11:31 PM
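(A minimal sketch of how a few of the packages named above might sit together in one workflow; the model and data are illustrative, not from the post.)

```r
# Illustrative workflow combining a few of the packages mentioned above.
library(parameters)       # easystats: tidy coefficient tables
library(performance)      # easystats: model diagnostics
library(marginaleffects)

fit <- lm(mpg ~ wt + hp, data = mtcars)

model_parameters(fit)                # coefficients with CIs
check_model(fit)                     # assumption/diagnostic plots
avg_slopes(fit, variables = "wt")    # average marginal effect of wt
```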
This is so common and what makes it extra confusing is how trivially easy it is to claim your business on Google Maps 🤔
July 6, 2024 at 11:56 AM
It is indeed true... Michael Clark has a running blog where he checks this every so often, and a new paper (or perhaps preprint) I saw on Twitter just today concludes the same thing. xgboost ftw. (Not only is gradient boosting better with tabular data, it's also less resource costly.)
July 3, 2024 at 11:24 AM
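(A minimal gradient-boosting sketch with the xgboost R package, using mtcars as a stand-in tabular dataset; the hyperparameters are illustrative only.)

```r
# Gradient boosting on a small tabular dataset (mtcars), for illustration.
library(xgboost)

X <- as.matrix(mtcars[, c("wt", "hp", "disp", "qsec")])
y <- mtcars$mpg

dtrain <- xgb.DMatrix(data = X, label = y)

fit <- xgb.train(
  params  = list(objective = "reg:squarederror", max_depth = 3, eta = 0.1),
  data    = dtrain,
  nrounds = 200
)

head(predict(fit, dtrain))
xgb.importance(model = fit)  # which columns drive the predictions
```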
I agree with your instinct to add random slopes, but I'm not sure how well those fit with just two timepoints. Also, in his blog Sebastian has a nice discussion of the differences between the `~ tx + time + tx:time` model vs. the one that doesn't keep tx as a separate IV (in the two-timepoint context).
December 7, 2023 at 6:25 PM
If you don't care about change through T3, then you can do `df |> filter(time %in% c(1, 2)) |> lmer(y ~ time + time:tx + (1 | id), data = _)` if you want the MLM variant of the ANCOVA, as detailed nicely in S. Kurz's blog. Of course, just cut the filter() call to keep all timepoints.
Just use multilevel models for your pre/post RCT data | A. Solomon Kurz
I've been thinking a lot about how to analyze pre/post control group designs, lately. Happily, others have thought a lot about this topic, too. The goal of this post is to introduce the change-score a...
solomonkurz.netlify.app
December 7, 2023 at 6:22 PM
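(A runnable sketch expanding the snippet in the post above, on simulated three-timepoint RCT data; the variable names y, time, tx, and id follow the post, everything else is illustrative.)

```r
# Expanded sketch of the pre/post MLM snippet above, on simulated data.
library(dplyr)
library(lme4)

set.seed(1)
d <- expand.grid(id = factor(1:100), time = 1:3) |>
  mutate(
    tx = as.integer(as.integer(id) > 50),                        # half treated
    y  = 10 + 0.5 * time + 1.5 * (time - 1) * tx + rnorm(n(), sd = 2)
  )

# MLM variant of the ANCOVA: keep only the first two timepoints and omit the
# tx main effect, so groups are constrained to be equal at baseline
m_ancova <- d |>
  filter(time %in% c(1, 2)) |>
  lmer(y ~ time + time:tx + (1 | id), data = _)

# Conventional parameterization with tx kept as a separate IV, all timepoints;
# a random slope (time | id) is generally not identifiable with only two
# timepoints per person, which is why the snippet above sticks to (1 | id)
m_full <- lmer(y ~ tx + time + tx:time + (1 | id), data = d)

summary(m_ancova)
summary(m_full)
```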