Teague Henry
teaguerhenry.bsky.social
Asst Prof @ UVA Psychology and School of Data Science. Networks, neurons, and complex systems models of psychopathology.
May 29, 2025 at 1:00 PM
"PLEASE read the title"
ALT: a man with a beard is saying no in a blurry photo
May 1, 2025 at 5:50 PM
Yes! I tell my class that regression coefficients are the "unique" relationships of a variable. If variables are not "unique", there is less information available to make the SEs "smaller." It's always misconceptions about either multicollinearity or the normality assumptions that you've got to disabuse students of!
April 30, 2025 at 2:21 PM
Reposted by Teague Henry
April 29, 2025 at 1:16 PM
Wait, you're telling me that a study in 20 children with ADHD showing an increased correlation between 2 arbitrarily defined brain regions doesn't represent a paradigm shift in how we could treat ADHD with respect to possible targeted pharmaceuticals?
April 17, 2025 at 3:16 PM
I mean, why do journals give due dates on RRs if not to encourage us to submit at the very last second?
April 8, 2025 at 2:59 PM
Homebrew warlock patron "The P-factor" that lets you summon and bind fiends by drawing path diagrams for multi-level SEMs.

At level 17 you gain the ability to measure constructs ... without error!
March 25, 2025 at 12:58 PM
... to use human raters to reconcile any ambiguities. I think this is fascinating work, and a great example of how to use LLMs as a true tool for analyzing data (rather than just asking an LLM to analyze data). Joy is currently working on an extension to discover new topics, which is very exciting!
February 11, 2025 at 9:05 PM
We found that ensembles of small LLMs tended to have reasonable (better than chance) performance at identifying topics. It wasn't perfect, but it is good enough to be a reasonable first pass through a large dataset. The idea is to first identify the cases that are easy to classify, then...
February 11, 2025 at 9:05 PM
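The triage idea in the post above (let the ensemble handle easy cases, flag disagreements for humans) can be sketched with a simple majority vote. This is my illustration, not the authors' actual pipeline; the labels would in practice come from prompting locally hosted small LLMs, and the agreement threshold is an assumed parameter.

```python
from collections import Counter

def ensemble_label(model_votes, agreement=0.8):
    """Majority vote across an ensemble of small LLMs.

    model_votes: one topic label per model for a single free-text
    response (in practice each label would come from prompting a
    locally hosted model; here the votes are passed in directly).

    Returns (topic, needs_human_review): responses the models agree
    on strongly count as "easy" first-pass classifications, while
    disagreements are flagged for human raters to reconcile.
    """
    votes = Counter(model_votes)
    topic, count = votes.most_common(1)[0]
    needs_review = count / len(model_votes) < agreement
    return topic, needs_review

# An "easy" case: all five models agree -> not flagged
print(ensemble_label(["work stressor"] * 5))
# An ambiguous case (3 vs 2 split) -> flagged for human review
print(ensemble_label(["work stressor", "work stressor",
                      "interpersonal stressor",
                      "interpersonal stressor", "work stressor"]))
```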
We compared the LLM topics to human raters' topics. Now, importantly, Joy used an ensemble of LLMs (equivalent to having several human raters). Why? Well, we wanted this method to be able to run locally, as there are a number of privacy issues with using consumer LLMs to analyze health data.
February 11, 2025 at 9:05 PM
She applied this method to both a massive dataset of Reddit posts from eating disorder related forums, and a smaller dataset of free-text responses from patients with eating disorders (courtesy of @cherilev.bsky.social), and compared how well these LLMs could identify a prespecified set of topics.
February 11, 2025 at 9:05 PM
So, Joy decided to use LLMs to query free-text data to extract "topics." For example, if a study collected responses regarding stressful events, then a topic might be a description like "work stressor" or "interpersonal stressor."
February 11, 2025 at 9:05 PM
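A minimal sketch of the kind of prompt such a query might use, with the prespecified topics listed as the only allowed answers. The prompt wording and helper name here are my own assumptions; the thread does not show the actual prompts used in the study.

```python
def build_topic_prompt(response_text, topics):
    """Hypothetical prompt asking a local LLM to assign one
    prespecified topic to a free-text response."""
    topic_list = "\n".join(f"- {t}" for t in topics)
    return (
        "You are coding free-text survey responses about stressful events.\n"
        f"Assign the response to exactly one of these topics:\n{topic_list}\n"
        "Reply with the topic name only.\n\n"
        f"Response: {response_text!r}"
    )

prompt = build_topic_prompt(
    "My boss criticized my report in front of the whole team.",
    ["work stressor", "interpersonal stressor", "health stressor"],
)
print(prompt)
```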
Enter LLMs! While I have a number of thoughts on LLMs that I won't get into here, for this project we conceptualized them as "tools that can comprehend free-text," with comprehension being akin to human reading comprehension (do LLMs comprehend? No, but they sure do seem like they do!)
February 11, 2025 at 9:05 PM
Traditionally, this is done via teams of human raters. Researchers will put together groups of raters (usually grad students, possibly RAs) that will go through the free text responses and quantify them in some way. This takes an enormous amount of time and resources.
February 11, 2025 at 9:05 PM
Free-text responses are a wonderful way of collecting nuanced information about any number of psychological phenomena, but the issue is, of course, that they are free-text responses. To perform quantitative analysis on them, you need to convert them into some set of numbers.
February 11, 2025 at 9:05 PM
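One common way to do that conversion, once each response has a topic label, is to code the labels as 0/1 indicator variables. A minimal sketch (my illustration of the general idea, not the authors' pipeline):

```python
# Prespecified topics and the label assigned to each response
topics = ["work stressor", "interpersonal stressor", "health stressor"]
labels = ["work stressor", "interpersonal stressor", "work stressor"]

# One row per response, one 0/1 indicator column per topic
indicator_matrix = [[int(lab == t) for t in topics] for lab in labels]
print(indicator_matrix)  # [[1, 0, 0], [0, 1, 0], [1, 0, 0]]
```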
We originally thought about this project from an EMA perspective, trying to optimize for participant burden. So that’s where we are coming from!
February 8, 2025 at 9:40 PM