Chandler Squires
@chandlersquires.bsky.social
CMU postdoc, previously MIT PhD. Causality, pragmatism, representation learning, and AI for biology / science more broadly. Proud rat dad.
I'm in Lausanne for CLeaR! www.cclear.cc/2025

Looking forward to an excellent program, seeing old friends, and making some new ones - feel free to message me if you'll be here.
May 5, 2025 at 9:46 AM
If I were a high school senior deciding between undergrad programs, or an undergraduate senior deciding between graduate programs, the recent conduct of places like Harvard would weigh heavily in their favor. Long-term gain for short-term pain.
April 16, 2025 at 3:52 AM
Reposted by Chandler Squires
A massive, self-inflicted wound on American higher education.
February 21, 2025 at 11:19 PM
Some thoughts on interpolation vs. extrapolation:

I have a soft spot for the word “extrapolation” in the context of machine learning, using it as a broad term to capture ideas like compositional generalization and various forms of distributional robustness.

But it can be a major linguistic crutch.
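
An added toy sketch of the narrow, geometric reading of these words (my illustration, not part of the original post): a flexible model fit on inputs in [-1, 1] typically predicts well on held-out points inside that range (interpolation) but can fail badly just outside it (extrapolation).

```python
# Toy illustration (added sketch, not from the post): interpolation vs.
# extrapolation for a polynomial fit to noisy sine data on [-1, 1].
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(-1.0, 1.0, size=200)
y_train = np.sin(np.pi * x_train) + 0.1 * rng.normal(size=200)

# Fit a degree-9 polynomial by least squares.
coefs = np.polyfit(x_train, y_train, deg=9)

def mse_on(x):
    """Mean squared error of the fit against the noiseless target."""
    return np.mean((np.polyval(coefs, x) - np.sin(np.pi * x)) ** 2)

x_interp = np.linspace(-1.0, 1.0, 50)   # inside the training range
x_extrap = np.linspace(1.5, 2.5, 50)    # outside the training range

print(f"interpolation MSE: {mse_on(x_interp):.4f}")  # small
print(f"extrapolation MSE: {mse_on(x_extrap):.4f}")  # typically enormous
```

The broader uses of "extrapolation" - compositional generalization, distribution shift - stretch well past "points outside the training range", which is exactly where the word starts doing more work than it can bear.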
February 2, 2025 at 10:00 PM
I haven’t read the paper, but based on the authors, it would be high on my reading list if I were diving into fairness evaluation. At the meta level, I strongly vibe with the sense in which “necessary reading” is being used here, if I'm interpreting it correctly.
So happy to have this paper placed at NeurIPS 2024. If I may (without humility) say, I think that it is necessary reading for anyone who uses (or intends to use) fairness evaluations for applied ML.
Excited to share our new work on causal sensitivity analysis for fairness metrics at #NeurIPS2024! We've developed a causal sensitivity analysis framework to understand how underlying measurement biases (encoded by DAGs) impact machine learning fairness evaluations. 1 / 5
December 13, 2024 at 5:34 AM
I'm on the program committee for NeuS 2025: neus-2025.github.io. I highly recommend checking it out!

NeuS (Neurosymbolic Systems) is a cutting-edge new conference at the intersection of machine learning, programming languages, control theory, and more - I'm excited to see how it takes shape!
December 11, 2024 at 8:46 PM
Minimax/worst-case results can give an overly pessimistic view of problem complexity. In stats and related areas, adaptive/instance-dependent results give a more refined, and potentially more optimistic, view of sample complexity.

What's the best analogue in CS?
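
As a concrete (added) illustration of the contrast, from fixed-confidence best-arm identification in a K-armed bandit, up to log factors:

```latex
% Added sketch: K-armed bandit, suboptimality gaps \Delta_i = \mu_{i^*} - \mu_i.
% Worst-case (finding an \epsilon-best arm) vs. instance-dependent sample
% complexity at confidence 1 - \delta, up to log factors:
\[
  N_{\text{minimax}} \;\asymp\; \frac{K}{\epsilon^{2}} \log\frac{1}{\delta},
  \qquad
  N_{\text{instance}} \;\asymp\; \sum_{i \neq i^{*}} \frac{1}{\Delta_{i}^{2}} \log\frac{1}{\delta}.
\]
% When most gaps are large, the instance-dependent bound is far smaller than
% the worst case, which corresponds to all gaps being on the order of \epsilon.
```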
November 26, 2024 at 10:23 PM
Reposted by Chandler Squires
We're hiring two interns and a full-time post in the Valence Labs London office. If you're interested in causal inference / generative models / latent variable inference, and how to use these ideas to impact real-world drug discovery programs, consider applying: job-boards.greenhouse.io/valencelabs
November 22, 2024 at 1:15 PM