Luis Welbanks
luiswel.bsky.social
51 Pegasi b & Presidential Postdoctoral Fellow studying the atmospheres of planets outside our Solar System at @SESEASU -> Assistant Professor @SESEASU 2025

Previously @NASA Sagan Fellow. @Gates_Cambridge
scholar at @Cambridge_Uni.
What are your thoughts?
July 20, 2025 at 1:45 AM
I think the full title was an unfortunate Ceres of events btw, but I am no Lemoony Snicket expert
June 13, 2025 at 12:38 AM
And sanctioned by the journal :P
June 12, 2025 at 6:00 PM
Call them out ;)
June 3, 2025 at 9:47 PM
Great show. 10/10
May 28, 2025 at 6:01 PM
Reposted by Luis Welbanks
I agree with you that saying K2-18 b “can’t” have an ocean or “isn’t” an ocean world is a stretch - we can’t totally rule it out with the present data, but it does appear that Neptune-like or gas dwarf models are consistent with what we know about the planet and require much less fine-tuning
May 25, 2025 at 5:29 PM
"Relies on the obfuscation of how, exactly, they are defining [...] in order to garner press coverage. "

&
"An exceedingly generous observer might chalk up this divergence to the perennial conflict between scientists and their PR machines"

Great title: [Fill in the blank] Can’t Have It Both Ways
May 25, 2025 at 6:58 AM
What do you mean by original hypothesis? Just to make sure I understand correctly, are you suggesting that the discourse (online/in papers?) should say that the original hypothesis (which?) is not ruled out but neither is Wogan's or X?
Not being facetious, legitimately asking.
May 23, 2025 at 7:47 PM
In the low-SNR regime, it is possible for the inferred properties to be shaped as much by preconceived notions of what a planet ought to be like as by the data (if we are not careful).
May 23, 2025 at 6:07 PM
Our take-home on this specific point could be: "Reliance on Bayesian evidence alone, coupled with exploration of only a narrow part of the model space, has led to contradictory interpretations."
May 23, 2025 at 6:07 PM
On that we say "Conversely, when all candidate models adequately fit a spectrum, a preference for one model over another does not rule out the worse-performing model."
May 23, 2025 at 6:07 PM
2) At the same time - you could ask: can the data rule out X hypothesis? The answer may be no, and that's ok too!

e.g., K2-18b MIRI - current data cannot rule out the scenario of a planet under radiative-convective-photochemical equilibrium (section Self-Consistent Models)
May 23, 2025 at 6:07 PM
We say in our paper "When all considered candidate models are poor representations of reality, the best-performing model is simply the least inadequate and may not necessarily lead to reliable interpretations of the data."
May 23, 2025 at 6:07 PM
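The "least inadequate" failure mode is easy to reproduce with toy numbers (a sketch with made-up data, not from the paper): fit two models that are both wrong to data neither can describe, and the Bayes factor can still "decisively" prefer one of them.

```python
import numpy as np

# Toy data: a sine wave, which NEITHER candidate model below contains
rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0 * np.pi, 50)
sigma = 0.1
y = np.sin(x) + rng.normal(0.0, sigma, x.size)

def log_evidence_and_best_chi2(grid, model):
    # Direct grid integration of the likelihood under a uniform prior on the grid
    logL = np.array([-0.5 * np.sum(((y - model(p)) / sigma) ** 2) for p in grid])
    m = logL.max()
    dx = grid[1] - grid[0]
    logZ = m + np.log(np.sum(np.exp(logL - m)) * dx / (grid[-1] - grid[0]))
    return logZ, -2.0 * m / x.size  # log evidence, best-fit chi2 per data point

grid = np.linspace(-2.0, 2.0, 4001)
logZ_flat, chi2_flat = log_evidence_and_best_chi2(grid, lambda c: np.full_like(x, c))
logZ_line, chi2_line = log_evidence_and_best_chi2(grid, lambda b: b * (x - np.pi))

lnB = logZ_line - logZ_flat
print(f"ln B (line over flat) = {lnB:.0f}")  # huge 'preference' for the line...
print(f"best chi2/N: flat = {chi2_flat:.0f}, line = {chi2_line:.0f}")  # ...yet both fits are awful
```

The line model "wins" the comparison by a large margin, but both models leave chi2 per point far above 1: the preferred model is simply the least inadequate, and the Bayes factor alone does not tell you that.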
Not 100% sure I understand your question(s) but let me try.

1) Constraining a parameter does not equal 'detecting' that parameter. I can add a nonsense parameter and get a tight constraint. This is the whole point of the cheese v. sponge example.
May 23, 2025 at 6:07 PM
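The constraint-vs-detection distinction can be sketched with made-up numbers via the Savage-Dickey density ratio for a nested extra parameter (an illustrative choice, not the thread's example): a posterior can be far narrower than its prior (a "tight constraint") while the evidence still favours leaving the parameter out.

```python
import math

# Prior on a hypothetical extra parameter b: uniform over [-100, 100]
prior_width = 200.0
prior_density_at_zero = 1.0 / prior_width

# Suppose the data give a Gaussian posterior on b: mean 0.1, sigma 0.5.
# That is 400x narrower than the prior -- a "tight constraint" on b.
post_mean, post_sigma = 0.1, 0.5
post_density_at_zero = math.exp(-0.5 * (0.0 - post_mean) ** 2 / post_sigma**2) / (
    post_sigma * math.sqrt(2.0 * math.pi)
)

# Savage-Dickey: B01 = p(b=0 | data) / p(b=0), the evidence ratio in favour
# of the simpler model WITHOUT b
B01 = post_density_at_zero / prior_density_at_zero
shrinkage = prior_width / post_sigma
print(f"posterior is {shrinkage:.0f}x narrower than the prior")
print(f"B01 = {B01:.0f}: the data favour the model without b")
```

So a "tight constraint" on b coexists with strong evidence against including b at all: constrained, but not detected.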
There is no "Bayesian police" to say what to compare or not. Any paper can compare any two models (but please contextualize!).
However, if we are going to argue about 'standard practices', the "consensus" (somewhat arbitrary) is to compare relative to the full hypothesis space 2/2
Thanks for engaging!
May 23, 2025 at 5:56 PM
Sounds good! Two distinct yet complementary points.
In general Welbanks & Nixon+ is not arguing against model comparisons but it is making an appeal to contextualize them. What two models did you compare? From that perspective Chubb+20 is doing that in the abstract: X sigma relative to blah. 1/2
May 23, 2025 at 5:56 PM
🤩🤩🤩🤩
May 23, 2025 at 5:41 AM
Say hi to Ian and Jasmina! If you haven't done so, visit the Sheikh Zayed grand mosque. I was blown away by the beautiful interiors.
May 22, 2025 at 10:19 PM
But how certain are we that we can ignore the uncertainties?

The scientist's paradox
May 22, 2025 at 10:09 PM
@viciykevin.bsky.social one could argue whether the "full hypothesis space" is valid or not - we discuss that in our paper. However, the comparison is performed against this full hypothesis space. This is how the comparison got its connection to 'detection'.
May 22, 2025 at 3:14 PM
What @distantworlds.space said. Section 4 of Gasman says "In Tables 6–10 we specify the Bayes factor, B01, for each retrieval set-up, comparing the retrieval with the specified molecule (C2H2, C2H4, CH4) included versus not included" - you want to compare against your full hypothesis space.
May 22, 2025 at 3:10 PM
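A minimal sketch of that kind of comparison (toy data, with a Gaussian "absorption feature" standing in for a molecule; all numbers hypothetical): compute the evidence with the feature's depth free versus fixed at zero, and take the ratio.

```python
import numpy as np

# Toy spectrum: a Gaussian absorption feature of depth 0.2 plus noise
rng = np.random.default_rng(2)
x = np.linspace(-5.0, 5.0, 80)
sigma = 0.05
y = -0.2 * np.exp(-0.5 * x**2) + rng.normal(0.0, sigma, x.size)

def log_evidence(depths):
    # Grid integration over the feature depth, uniform prior over the grid
    logL = np.array(
        [-0.5 * np.sum(((y + d * np.exp(-0.5 * x**2)) / sigma) ** 2) for d in depths]
    )
    m = logL.max()
    dx = depths[1] - depths[0]
    return m + np.log(np.sum(np.exp(logL - m)) * dx / (depths[-1] - depths[0]))

# "Included": depth free over [0, 1]; "not included": depth fixed at 0
logZ_full = log_evidence(np.linspace(0.0, 1.0, 2001))
logZ_no_feature = -0.5 * np.sum((y / sigma) ** 2)  # no free parameters

lnB = logZ_full - logZ_no_feature
print(f"ln B (feature vs none) = {lnB:.0f}")
```

Here the reduced model is nested inside the fuller one, so the Bayes factor B01 is a statement about whether the data demand the feature relative to that fuller hypothesis space, which is what gives the comparison its 'detection' interpretation.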