Nathaniel Forde
@nathanielforde.bsky.social
https://nathanielf.github.io/

Statistics, Probability; previously Logic and Philosophy
... licensing a causal interpretation, but like other methods it is only as defensible as the assumptions. With respect to hierarchies, I guess it depends on what the group hierarchy is... for e.g. gender, the differences in B coefficients would seem to me to read as a modification from baseline
November 17, 2025 at 9:29 PM
B coefficients is valid. So kind of like an IV design, it's valid under limited conditions... instead of instrument strength we're focused on the measurement model avoiding omitted variable bias. Bollen calls the conditional independence pseudo-isolation. SEMs provide an architecture for ...
November 17, 2025 at 9:29 PM
I think the continuity between SEMs and more general DAG methods is there. I generally think of SEMs as a limited DAG with a focus on using the abstraction layer of a CFA to ensure the appropriate conditional independences hold, so that conditional on the model the causal interpretation of the ...
November 17, 2025 at 9:29 PM
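To make the "modification from baseline" reading of group differences in B coefficients concrete, here is a minimal sketch in PyMC (the library behind the linked case study). The names, priors, and simulated data are illustrative assumptions, not the model under discussion: a CFA-style measurement model for each latent construct, plus a structural coefficient expressed as a baseline value with a gender offset.

# Hypothetical sketch: structural coefficient = baseline + group modification.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n = 250
female = rng.integers(0, 2, size=n)
ksi_true = rng.normal(size=n)                                   # upstream latent construct
eta_true = (0.4 + 0.3 * female) * ksi_true + rng.normal(scale=0.3, size=n)
x = ksi_true[:, None] * np.array([1.0, 0.8, 0.7]) + rng.normal(scale=0.4, size=(n, 3))
y = eta_true[:, None] * np.array([1.0, 0.9]) + rng.normal(scale=0.4, size=(n, 2))

with pm.Model() as sem:
    # measurement model (the CFA "abstraction layer"): indicators are
    # conditionally independent given their latent construct
    lam_x = pm.HalfNormal("lam_x", 1.0, shape=3)
    lam_y = pm.HalfNormal("lam_y", 1.0, shape=2)
    ksi = pm.Normal("ksi", 0, 1, shape=n)

    # structural model: B coefficient = baseline + gender modification
    b_base = pm.Normal("b_base", 0, 1)
    b_diff = pm.Normal("b_diff", 0, 1)
    sd_eta = pm.HalfNormal("sd_eta", 1.0)
    eta = pm.Normal("eta", mu=(b_base + b_diff * female) * ksi, sigma=sd_eta, shape=n)

    # measurement (residual) noise on the observed indicators
    sd_x = pm.HalfNormal("sd_x", 1.0, shape=3)
    sd_y = pm.HalfNormal("sd_y", 1.0, shape=2)
    pm.Normal("x_obs", ksi[:, None] * lam_x, sd_x, observed=x)
    pm.Normal("y_obs", eta[:, None] * lam_y, sd_y, observed=y)
    idata = pm.sample()

The "limited conditions" point shows up here as the usual SEM assumptions: the loadings and the residual independence of the indicators (pseudo-isolation) have to be right for b_base and b_diff to carry the causal reading.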
...that the exploration be done regardless. When focused on a causal claim, your sensitivity analysis should be a kind of refutation study. Under what reasonable variation of model spec will my finding fail?
November 17, 2025 at 9:16 PM
If you're smart, your pre-registration has some "redundant" measures trying to get at the core constructs. Maybe you ask 5 questions on job satisfaction and the data suggests 3 indicators work, but 5 fail in combination. The space for exploration is different but sensitivity analysis mandates...
November 17, 2025 at 9:16 PM
... revising your theory altogether. Change the direction of arrows. Report poor identification for the initial focal parameter. As long as you're transparent about the learning, I'm not too worried about revising your theory. On a more modest scale, priors or residual covariances can be tweaked.
November 17, 2025 at 9:16 PM
...can seem larger than the iterative space within a single study. But the potential exists at both levels of granularity. The Bayesian workflow emphasises the learning in the moment, especially as your theoretical model runs up against awkward data. The model fit statistics might suggest ...
November 17, 2025 at 9:16 PM
So I think that's interesting but maybe conflates two levels of granularity. There should be an iterative process for science writ large, which learns what it can from any dataset and then expands the study for the next data collection exercise. The freedom to hypothesise and alter approach...
November 17, 2025 at 9:16 PM
@flavourdave.bsky.social this write-up has the hierarchical SEM example we discussed a while back...
November 16, 2025 at 9:57 AM
The talk ends with a reflection on the nature of craft in statistical modelling, making an analogy with the practice of writing. Refinement and iteration are key for justifiable and robust conclusions.
November 16, 2025 at 9:09 AM
You can imagine how the latent relationships between these states shift as we change the configuration of the agent, and SEMs provide a nice quantified lens on the implications of these shifts: how shifted latent states impact downstream choice outcomes!
November 16, 2025 at 9:09 AM
This isn’t only useful for prediction.
It’s increasingly important diagnostically, as we try to understand differences in affect, preference, and function across a growing variety of artificial agents. The post describes how we can impose a hierarchical structure over the SEM relationships.
November 16, 2025 at 9:09 AM
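For what "a hierarchical structure over the SEM relationships" can look like in practice, here is a hedged PyMC sketch under assumed names and simulated data (not the post's actual model): the latent regression coefficient varies by group (think agent configuration), and the group coefficients are partially pooled under a shared prior.

# Hypothetical sketch: partially pooled structural coefficients across groups.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n, n_groups = 300, 4
g = rng.integers(n_groups, size=n)
ksi_true = rng.normal(size=n)
beta_true = np.array([0.2, 0.4, 0.6, 0.8])
eta_true = beta_true[g] * ksi_true + rng.normal(scale=0.3, size=n)
x = ksi_true[:, None] * np.array([1.0, 0.8]) + rng.normal(scale=0.4, size=(n, 2))
y = eta_true[:, None] * np.array([1.0, 0.7]) + rng.normal(scale=0.4, size=(n, 2))

with pm.Model() as hier_sem:
    # measurement models for the upstream and downstream constructs
    lam_x = pm.HalfNormal("lam_x", 1.0, shape=2)
    lam_y = pm.HalfNormal("lam_y", 1.0, shape=2)
    ksi = pm.Normal("ksi", 0, 1, shape=n)

    # hierarchical structural coefficients: each group's beta is drawn from
    # a common distribution, so sparse groups borrow strength from the rest
    mu_b = pm.Normal("mu_b", 0, 1)
    sd_b = pm.HalfNormal("sd_b", 0.5)
    beta = pm.Normal("beta", mu_b, sd_b, shape=n_groups)

    sd_eta = pm.HalfNormal("sd_eta", 1.0)
    eta = pm.Normal("eta", mu=beta[g] * ksi, sigma=sd_eta, shape=n)

    sd_x = pm.HalfNormal("sd_x", 1.0, shape=2)
    sd_y = pm.HalfNormal("sd_y", 1.0, shape=2)
    pm.Normal("x_obs", ksi[:, None] * lam_x, sd_x, observed=x)
    pm.Normal("y_obs", eta[:, None] * lam_y, sd_y, observed=y)
    idata = pm.sample()

Comparing the posterior for beta across groups is the "quantified lens" on how the latent relationships shift with configuration.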
In a Bayesian workflow, this becomes even more effective.
Iterative refinement helps ensure the latent structure is supported by both theory and data — giving us a statistical characterisation of relationships between an agent’s latent states.
November 16, 2025 at 9:09 AM
A key idea:
SEMs provide a principled way to abstract over noisy indicators (survey items, behavioural logs, choices, chat responses…) and infer latent constructs representing how an agent perceives or evaluates the world.
November 16, 2025 at 9:09 AM
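As a concrete (and deliberately minimal) illustration of that key idea, here is a toy confirmatory factor model in PyMC; the indicator names, priors, and simulated data are assumptions for the sketch, not the case study's code.

# Hypothetical sketch: infer one latent construct from three noisy indicators.
import numpy as np
import pymc as pm

rng = np.random.default_rng(2)
n = 200
construct_true = rng.normal(size=n)                  # e.g. "job satisfaction"
items = construct_true[:, None] * np.array([1.0, 0.8, 0.6]) \
        + rng.normal(scale=0.5, size=(n, 3))

with pm.Model() as cfa:
    # latent construct: one score per respondent / agent
    construct = pm.Normal("construct", 0, 1, shape=n)
    # positive loadings plus unit latent variance pin down scale and sign
    lam = pm.HalfNormal("lam", 1.0, shape=3)
    sd_item = pm.HalfNormal("sd_item", 1.0, shape=3)
    # measurement model: each indicator is a noisy readout of the construct
    pm.Normal("items", construct[:, None] * lam, sd_item, observed=items)
    idata = pm.sample()

The same pattern extends to behavioural logs or chat responses: swap the observation model for whatever likelihood suits the indicator, and the inferred construct remains the thing downstream relationships are estimated on.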
My talk focused on craft in statistical modelling with Bayesian workflows, using a case study on job satisfaction and what makes work feel compelling.

@pymc.io case study → www.pymc.io/projects/exa...

Slides → nathanielf.github.io/talks/pycon_...
Bayesian Workflow with SEMs
This case study extends the themes of contemporary Bayesian workflow and Structural Equation Modelling. While both topics are well represented in the PyMC examples library, our goal here is to show...
www.pymc.io
November 16, 2025 at 9:09 AM
The conference was energising — kicked off by @inesmontani.bsky.social , with great talks from Pietro Mascolo and others. Also surreal (in a good way) to be back on the UCD campus after so many years.
November 16, 2025 at 9:09 AM
Outside of tech too
October 25, 2025 at 10:28 PM
Coincidentally, I'm working on a general write-up on the value of such joint modelling at the moment! I think these models illuminate something about the Bayesian approach to causal inference more generally!
October 22, 2025 at 3:17 PM
Fun example! Is the data available online?
October 19, 2025 at 9:14 AM