David Manuel
@dmanuel.bsky.social
UBC clinical psych grad student

Studying meaning and connectedness as suicide-protective factors.

Also interested in metascience & inference, the intersections of tech (especially AI) with psych, and the political economy of ideas.

davidmanuel.substack.com
But it seems like there’s reason to be relatively confident that humans will maintain an advantage when it comes to producing insight — maybe indefinitely — even in a world with superintelligent AI.
February 14, 2025 at 12:24 AM
A huge drop in either training or inference costs (or both), without a large enough counterbalancing increase in demand, could change how the economics of compute plays out. And maybe I am being insufficiently forward-looking, given what we've seen so far.
February 14, 2025 at 12:24 AM
If future models do solve recursion (with massive and selective context windows), the cost of running them might still be prohibitive. Extended coherence and attention over long spans are expensive. We may reserve this for niche, high-impact cases. Plus Jevons paradox and whatnot.
February 14, 2025 at 12:24 AM
Or maybe it's because RLHF has been done on such short time scales -- selecting for the immediate best response instead of the best response that might emerge over a series of back-and-forth messages that produce a dialogue.
February 14, 2025 at 12:24 AM
Maybe it's because the functional context windows (the portion of the windows where they actually perform well) aren't big enough yet. LLMs lack a persistent (and ideally also selective) memory.
February 14, 2025 at 12:24 AM
One proposed explanation: LLMs are insufficiently recursive. They churn forward, line by line, rather than taking two steps forward and one step back.

Humans are better at going in noisy loops toward greater overall coherence.

How come?
February 14, 2025 at 12:24 AM
Superintelligence is already here for a bunch of knowledge work -- AI does it faster and roughly as well. If 97% perfection in the output is sufficient, AI is a great option already.

Yet, basically, no fresh insight on its own? As Dwarkesh Patel has been flagging, this is weird.
February 14, 2025 at 12:24 AM
My working definitions of insight vs information for this piece, drawing on @stephenwolfram.bsky.social's concept of the ruliad.
February 14, 2025 at 12:24 AM
For parts about specific disorders (d/o), one framing I remember appreciating when I took psychopathology was being taught and then asked about bio + psycho + social for each d/o (with the possibility of categories being empty or sparser for some).

Could explore cog, aff, beh within psycho too.
December 23, 2024 at 12:44 AM
Yes, totally! I haven't done an EMA study yet, so it hadn't crossed my mind, but that makes sense as another slippage point.

Maybe some kind of Qualtrics randomizer block that fills in one of several options each time could be useful there?

But it would need to be a norm to use something like that.
December 21, 2024 at 10:01 PM
Maybe Prolific and other platforms will move towards some kind of model where live video is used during data collection to verify that it is a real person, with the video deleted immediately after verification?

Perhaps a pipe dream, but maybe if there were enough demand?

Cost would increase…
December 21, 2024 at 9:20 PM
I agree speed bumps are something! Increasing the effort required to produce false data is worthwhile.

Just worried about the bumps being too small to effectively deter.

Also, I'm worried we may then use the existence of the speed bumps to ignore or downplay the risk of speeding.
December 21, 2024 at 9:20 PM
I wonder if live data collection (and, pending advances in AI video over the next 2 years, in-person data collection) might have to make a comeback, at least for open-ended responses.

Would be great if alternative solutions emerge though.
December 21, 2024 at 9:20 PM
This definitely seems like a tricky problem for the field. I do worry this solution might lead to a false sense of confidence, though.

Or worse, the ability to declare that efforts were made and thereby imply the open-ended answers are less likely to be AI-generated.
December 21, 2024 at 4:35 PM
Link to the 1989 paper Meehl co-authored: meehl.umn.edu/sites/meehl....

Link to @klonskylab.bsky.social's theory paper on understanding vs prediction (in the context of suicide theory, but applicable more broadly, I think)
December 16, 2024 at 5:06 PM
I wonder sometimes about what combo of these two aspects of prediction (nomothetic understanding vs forecasting) Meehl was getting at. Not clear to me yet.
December 16, 2024 at 5:06 PM
My view on this might also be coloured by the understanding vs prediction dialogue that my supervisor @klonskylab.bsky.social has been part of -- trying to delineate between predicting for the sake of understanding vs predicting for the sake of forecasting.
December 16, 2024 at 5:06 PM
More broadly, though, I'm still trying to figure out how much he meant something more like "valid and reliable nomothetic information".
December 16, 2024 at 5:06 PM
I may be misreading him, but I wonder if ML as sometimes (often?) done today, with its non-interpretable black-box aspects, doesn't meet the criteria for helping establish empirical relations in the way he meant.

Maybe interpretability advances will change that in the years to come, though, not sure.
December 16, 2024 at 5:06 PM
I've wondered too. I think he might say much of machine learning doesn't meet what he meant by actuarial prediction.

From the 1989 paper: "To be truly actuarial, interpretations must be both automatic (that is, prespecified or routinized) and based on empirically established relations."
December 16, 2024 at 5:06 PM
I'm also super new to this world, though, so I don't know whether my optimism about these things making a dent is totally miscalibrated. But even just having them articulated feels like something, I think.
December 14, 2024 at 9:32 PM
And then the stuff from Meehl 1990 and late Meehlism more broadly, to the extent that it can become mainstreamed again — around doing strong and specific NHST where seeing the finding in the absence of the phenomenon working as predicted really would be a “damn strange coincidence”.
December 14, 2024 at 9:32 PM