Karoline Huth
karolinehuth.bsky.social
Researcher @ University of Amsterdam
(Applied) Statistics | Bayesian | Networks | R software | Data Science | Climate Change
I don't mind the statement so much as the parallels to the 2017 paper; our paper makes a different point that contradicts yours. Hence my reference to the "non-replicability".

Our paper was meant as a scientific evaluation of the evidence in highly parameterized models
October 19, 2025 at 3:47 PM
One thought on networks as "methodological dead ends": for me, networks are just one of many methods, useful for some research questions that psychologists have. I agree they've been overused for the wrong questions and often over-interpreted—but it's not the networks' fault that researchers used them wrongly.
October 15, 2025 at 8:58 AM
I assume networks are in good company with other highly parameterized models in psychology, such as SEM, in terms of the amount of uncertainty present in the findings.

In psychology we mostly have (had) too little data for the large models we estimate (e.g., SEM, networks).
October 15, 2025 at 8:58 AM
Appreciate you picking up our work. I share many critiques of networks, but I don’t think our paper supports your argument. We show most edges are inconclusive.

Non-replication is conclusive evidence for an edge: it is in network A but not in B. Inconclusive edges can't establish replication (for me)
October 15, 2025 at 8:58 AM
Such a great app and tool! What is your reasoning for still showing an edge between two nodes even if someone indicates they don't think there is a connection? Can I indicate that a link is for sure not there?
March 3, 2025 at 10:14 AM
that would require the papers to have a testable research question 🙊

also, happy to give you access to our documents to assess your guess
January 25, 2025 at 10:28 AM
I can see that par cor are more prone to differences because you condition on a set of variables (and if that set differs between two samples, the par cor can also differ). Zero-order cor and par cor have the same number of parameters, so I would expect the same robustness
January 25, 2025 at 10:22 AM
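The conditioning-set point can be illustrated with a small simulation (a hypothetical sketch, not code from the paper): the zero-order correlation between x and y is sizable, but the partial correlation conditioning on a common cause z is near zero; which variables you condition on changes the estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Simulate a common-cause structure: z drives both x and y.
z = rng.normal(size=n)
x = z + rng.normal(scale=1.0, size=n)
y = z + rng.normal(scale=1.0, size=n)

def partial_corr(a, b, controls):
    """Partial correlation of a and b after regressing out `controls`
    (a list of 1-D arrays). Regression residuals via least squares."""
    C = np.column_stack([np.ones(len(a))] + list(controls))
    ra = a - C @ np.linalg.lstsq(C, a, rcond=None)[0]
    rb = b - C @ np.linalg.lstsq(C, b, rcond=None)[0]
    return np.corrcoef(ra, rb)[0, 1]

zero_order = np.corrcoef(x, y)[0, 1]  # conditions on nothing
par = partial_corr(x, y, [z])         # conditions on z
```

Here `zero_order` is substantial (the shared cause induces it), while `par` is close to zero once z is in the conditioning set.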
Interesting thought. For me, robustness of findings is a necessary condition to determine (non-)replication.

1) robustness (in this paper): sufficient support from data that my findings hold.
2) non-replication: there is sufficient evidence in both samples A and B; the edge is present in A and absent in B
January 25, 2025 at 10:22 AM
And yes, I am also super curious about the uncertainty underlying reported individual-level networks 🤓
January 24, 2025 at 3:07 PM
"[...], if an edge is present in one sample, but not in another, and we have inconclusive evidence in at least one of the samples, this does not mean that there is a contradiction [...]" (p9) We simply have insufficient information in at least one sample. With more data both edges may be present
January 24, 2025 at 3:07 PM
Thanks for the kind thread Miri! To clarify the last point: We fully agree with you that there are/were concerns about the robustness of the network literature. The difference (as I see it) is that we attribute it to insufficient information (data), rather than to an inherent property of the networks.
January 24, 2025 at 3:07 PM
Thankful for... 🙏
...all the researchers providing access and input to their data
...the dedicated assistants and colleagues that helped with data collection and cleaning
...everyone providing helpful input and calming words during the extensive project 🙏🧡 /end
January 24, 2025 at 11:02 AM
Applied researcher interested in understanding your phenomenon from a network perspective? Use our website to get insight into previous studies for potential meta-networks, or into the nodes/questionnaires commonly included.
January 24, 2025 at 11:02 AM
All results are available on an accompanying open-access website: uvasobe.shinyapps.io/ReBayesed/

Methodologist interested in methodology development? Use our resource of aggregated statistics for realistic simulation conditions (i.e., network density and expected edge weights).
January 24, 2025 at 11:02 AM
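The kind of simulation condition mentioned above could be set up as follows (a hypothetical sketch of my own; `simulate_network`, its parameters, and the default values are illustrative, not taken from the paper's resource):

```python
import numpy as np

def simulate_network(p=10, density=0.3, edge_weight=0.2, seed=0):
    """Sketch: build a sparse, symmetric partial-correlation matrix with a
    given edge density and constant edge weight, the kind of condition the
    aggregated statistics (density, expected edge weights) could inform.
    Note: a real simulation would also check positive definiteness."""
    rng = np.random.default_rng(seed)
    pcor = np.zeros((p, p))
    iu = np.triu_indices(p, k=1)                 # upper-triangle indices
    edges = rng.random(len(iu[0])) < density     # Bernoulli edge inclusion
    pcor[iu] = np.where(edges, edge_weight, 0.0)
    pcor = pcor + pcor.T                         # symmetrize
    np.fill_diagonal(pcor, 1.0)
    return pcor

net = simulate_network()
```

Data could then be drawn from a Gaussian graphical model whose partial correlations match `net`.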
What to do with...
...past network studies: Interpret their findings with caution and ideally aggregate them into a meta-network
...future network studies: Conduct a Bayesian analysis of your network, so you are at least aware of how (un)certain your results are. See how: doi.org/10.1177/2515...
January 24, 2025 at 11:02 AM
Our results do not imply a criticism of network models in general, but rather point out the inherent uncertainty underlying highly parameterized models estimated on commonly insufficient sample sizes.
January 24, 2025 at 11:02 AM
This does not mean that most network results are flawed, but rather that most network findings are reported with more confidence than the data warrant.
Many network results are overstated, and some may be incorrect (i.e., may not hold up with further data).
January 24, 2025 at 11:02 AM
80% of all edges in the analyzed networks lack sufficient data support to confirm their presence or absence. One-third show inconclusive evidence (BF < 3), half show weak evidence (BF 3–10), and fewer than 20% show compelling evidence (BF > 10).
January 24, 2025 at 11:02 AM
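As a small illustration (my own sketch, using only the thresholds quoted in the post above), those evidence categories could be encoded as:

```python
def classify_evidence(bf):
    """Categorize an edge's Bayes factor using the thresholds from the
    thread: BF < 3 inconclusive, BF 3-10 weak, BF > 10 compelling.
    Assumes BF quantifies evidence for one hypothesis (presence or
    absence); evidence in the other direction uses the reciprocal 1/BF."""
    if bf < 3:
        return "inconclusive"
    elif bf <= 10:
        return "weak"
    else:
        return "compelling"
```

For example, an edge with BF = 5 would fall in the "weak" category, and would not count as conclusive evidence for presence under the paper's criterion.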