Nate TeBlunthuis
@groceryheist.cc
https://teblunthuis.cc

Computational social scientist with a foot in HCI. Main social media is @groceryheist@social.coop

Assistant Professor at the University of Texas at Austin in the School of Information.
I found that they did! The graphic below depicts how most of the longest-lasting episodes of ecological interaction between subreddits were mutualistic.
June 28, 2025 at 5:50 PM
In that work, I used time series models to infer networks of competition and mutualism between overlapping online communities. This work found evidence that these interactions tended to be mutualistic. For example, the diagram below shows a network of mental health subreddits that is dense with mutualism.
June 28, 2025 at 5:50 PM
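A minimal sketch of this kind of inference, using synthetic activity series and a simple VAR(1) rather than the paper's actual models: classify a pair's interaction type from the signs of the cross-lag coefficients.

```python
# Toy sketch (not the paper's model): infer the interaction type between
# two communities from the cross-lag coefficients of a VAR(1).
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(42)

# Simulate weekly (log) activity for two mutualistic communities: each
# community's activity is boosted by the other's past activity.
T = 200
x = np.zeros((T, 2))
for t in range(1, T):
    x[t, 0] = 0.5 * x[t - 1, 0] + 0.2 * x[t - 1, 1] + rng.normal(0, 0.1)
    x[t, 1] = 0.2 * x[t - 1, 0] + 0.5 * x[t - 1, 1] + rng.normal(0, 0.1)

fit = VAR(x).fit(1)
cross = fit.coefs[0]  # lag-1 coefficient matrix: cross[i, j] = effect of j on i

# Positive cross-lag effects in both directions suggest mutualism;
# negative in both suggest competition; mixed signs, exploitation.
a_on_b, b_on_a = cross[1, 0], cross[0, 1]
if a_on_b > 0 and b_on_a > 0:
    print("mutualism", a_on_b, b_on_a)
elif a_on_b < 0 and b_on_a < 0:
    print("competition", a_on_b, b_on_a)
else:
    print("mixed/exploitative", a_on_b, b_on_a)
```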
Often, several different online communities exist where similar people talk about similar things. This is easy to observe browsing Reddit or Facebook groups. For example, the visualization below of clustered subreddits with overlapping users shows different subreddits related to cycling.
June 28, 2025 at 5:50 PM
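A rough sketch of the clustering idea behind that visualization, on hypothetical toy data: represent each subreddit by its set of commenters and cluster subreddits whose user bases overlap.

```python
# Rough sketch: cluster subreddits by commenter overlap. Toy data; the
# subreddit names and (subreddit, commenter) pairs are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

pairs = [("bicycling", "u1"), ("bicycling", "u2"), ("cycling", "u1"),
         ("cycling", "u2"), ("MTB", "u3"), ("MTB", "u4"),
         ("mountainbiking", "u3"), ("mountainbiking", "u4")]
subs = sorted({s for s, _ in pairs})
users = sorted({u for _, u in pairs})

M = np.zeros((len(subs), len(users)))       # subreddit x user matrix
for s, u in pairs:
    M[subs.index(s), users.index(u)] = 1

norms = np.linalg.norm(M, axis=1)
sim = (M @ M.T) / np.outer(norms, norms)    # cosine similarity of commenter vectors
dist = 1 - sim
np.fill_diagonal(dist, 0)

# Average-linkage hierarchical clustering on the user-overlap distances
Z = linkage(squareform(dist, checks=False), method="average")
print(dict(zip(subs, fcluster(Z, t=2, criterion="maxclust"))))
```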
Got a cool zine in the mail today. Free download here: lovelesspress.itch.io/a-web-worth-...
June 10, 2025 at 11:48 PM
Bluesky now has over 10 million users, and I was #43,945!
September 17, 2024 at 4:54 AM
Thrilled to announce my appointment as Assistant Professor of Social Informatics at the @UTiSchool. I'm so excited to join this intellectual community :D.
I'm recruiting PhD students interested in online communities and AI/ML in social science, broadly construed. Hook 'em!
July 17, 2024 at 6:14 PM
Notably, this app is the only competitor with a blue logo reminiscent of Twitter.
July 30, 2023 at 6:25 AM
Computational social scientists using machine classifiers build trust in evidence by reporting *predictive performance*. Metrics like F1 or AUC are important, but our results show that we can do better: we can use validation data to correct misclassification bias!
July 14, 2023 at 3:42 AM
Similarly, when a classifier predicts the DV and makes errors that are correlated with an IV, naïve estimates of that IV's effect can be badly biased. In this case, only our MLA method was able to recover the true value.
July 14, 2023 at 3:41 AM
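A toy simulation of this DV scenario (the error rates here are made up, not the paper's setup): the outcome is a classifier label whose error rate grows with x, and the naïve estimate is badly biased.

```python
# Toy simulation: the DV is measured by a classifier whose errors depend
# on the independent variable, biasing the naive coefficient on x.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50_000
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-0.5 * x))          # true effect of x is 0.5
y = rng.binomial(1, p)

# Errors correlated with x: the classifier flips more labels at high x.
flip = rng.binomial(1, 1 / (1 + np.exp(-(-2.0 + 1.5 * x))))
w = np.where(flip == 1, 1 - y, y)

naive = sm.Logit(w, sm.add_constant(x)).fit(disp=0)
truth = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
print("naive slope:", naive.params[1], "true-label slope:", truth.params[1])
```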
For instance, as this figure shows, when a (not very accurate and moderately biased) classifier predicts an IV and makes errors that are correlated with the DV, a naïve (uncorrected) method gets the sign wrong and is very confident about it!
July 14, 2023 at 3:40 AM
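A toy simulation of this IV scenario (again with made-up error rates): errors in the predicted labels are correlated with the outcome, and the naïve regression flips the sign of a positive effect.

```python
# Toy simulation: a binary IV is measured by a classifier whose errors
# are correlated with the outcome; the naive regression gets the sign wrong.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100_000
x = rng.binomial(1, 0.5, n)
y = 0.3 * x + rng.normal(size=n)        # true effect of x is +0.3

sigmoid = lambda z: 1 / (1 + np.exp(-z))
# Errors depend on y: false negatives pile up at high y,
# false positives at low y.
fn = rng.binomial(1, sigmoid(-1 + 2 * y))
fp = rng.binomial(1, sigmoid(-1 - 2 * y))
x_hat = np.where(x == 1, 1 - fn, fp)

naive = sm.OLS(y, sm.add_constant(x_hat)).fit()
truth = sm.OLS(y, sm.add_constant(x)).fit()
# With these error rates the naive slope comes out negative.
print(f"naive slope: {naive.params[1]:+.3f}  true-label slope: {truth.params[1]:+.3f}")
```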
We use Monte Carlo simulations to test methods proposed by social scientists, but none work in all the above scenarios (details in the paper). Therefore, we propose *maximum likelihood adjustment* (MLA), adapted from a biostatistics framework, which does!
July 14, 2023 at 3:40 AM
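A minimal sketch of the general idea behind likelihood-based misclassification correction, for a misclassified DV with random errors; this illustrates the family of methods MLA belongs to, not the paper's implementation.

```python
# Minimal sketch of likelihood-based correction for a misclassified
# outcome, with error rates estimated from a small validation sample.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, n_val = 50_000, 1_000
x = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-(-0.2 + 0.5 * x)))
y = rng.binomial(1, p_true)
# Classifier with imperfect sensitivity/specificity (random errors here)
w = np.where(y == 1, rng.binomial(1, 0.85, n), rng.binomial(1, 0.10, n))

# Estimate error rates on a validation sample where true labels are known
val = rng.choice(n, n_val, replace=False)
se = w[val][y[val] == 1].mean()          # sensitivity
sp = 1 - w[val][y[val] == 0].mean()      # specificity

def neg_loglik(beta):
    p = 1 / (1 + np.exp(-(beta[0] + beta[1] * x)))
    pw = se * p + (1 - sp) * (1 - p)     # P(w=1 | x), marginalizing over y
    return -(w * np.log(pw) + (1 - w) * np.log(1 - pw)).sum()

fit = minimize(neg_loglik, x0=[0.0, 0.0], method="BFGS")
print("corrected estimates:", fit.x)     # close to the true (-0.2, 0.5)
```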
Finally, to show (3), we test methods that use (small amounts of) validation data to correct misclassification bias. An ideal method works with independent or dependent variables (IV or DV) and with errors that are *random* or *systematic* (correlated with modeled variables).
July 14, 2023 at 3:39 AM
We show (1) using Perspective API, a toxicity classifier widely used to study social media, and the human-labeled civil comments dataset. Perspective is very accurate and only modestly biased, but it still causes sign-flips (type I or type II errors) in a realistic study design.
July 14, 2023 at 3:38 AM
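Scoring a comment with Perspective looks roughly like this (request and response shapes follow Google's public docs; the API key is a placeholder):

```python
# Rough sketch of scoring a comment's toxicity with the Perspective API.
import requests

URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=YOUR_API_KEY")  # placeholder key

def toxicity(text: str) -> float:
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity("You are a wonderful person."))  # expect a low score
```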
Automated classifiers are never perfect. They make errors and often manifest biases related to social categories. Such errors cause *misclassification bias*, threatening the validity of statistical findings!
(can we fix it.jpg)
July 14, 2023 at 3:37 AM
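Even purely random errors matter: a quick demonstration of the classic attenuation bias from randomly misclassified labels (made-up numbers, for illustration).

```python
# Quick demo: even *random* misclassification of a binary variable
# attenuates the estimated effect toward zero.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
x = rng.binomial(1, 0.5, n)
y = 1.0 * x + rng.normal(size=n)        # true effect: 1.0

# Flip 15% of labels uniformly at random (errors independent of y)
noise = rng.random(n) < 0.15
x_hat = np.where(noise, 1 - x, x)

slope = lambda a: np.cov(y, a)[0, 1] / np.var(a)
print("true-label slope:", slope(x), " noisy-label slope:", slope(x_hat))
```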