Aidan Sirbu
@sirbu.bsky.social
MSc Student @ Mila and McGill

ML & NeuroAI research
5) Finally, I don't use it for writing as much as my peers do. But it's quite nice asking it how to say things like "this is best exemplified by..." in a different way so I don't repeat the same phrase a million times in my paper or scholarship application.
September 4, 2025 at 3:57 PM
4) As for idea synthesis, I find my conversations with LLMs about as useful as talking to a layman with decent critical thinking skills and access to Google. It's nice to bounce ideas off it at 1am when my colleagues may be asleep. But conversing with other academics is still by far more useful.
September 4, 2025 at 3:54 PM
3) I am very grateful I learned how to code before the advent of LLMs. I think there's a real concern of new students relying too heavily on LLMs for coding, forgoing the learning process. At least for the time being, using LLMs effectively still requires a strong foundation in coding.
September 4, 2025 at 3:51 PM
2) When I start new coding projects, getting Copilot to draft the initial commit saves me loads of time. Of course there will be quite a few errors and/or silly code structure. But I find tracing through the logic and making the necessary corrections to be quicker than starting from scratch.
September 4, 2025 at 3:48 PM
I'm a graduate student breaking into the field of ML from physics.

1) I find LLMs useful for gaining a basic understanding of new concepts. However, going past a basic understanding still requires delving deep into the literature. I find the back-and-forth, tutor-style conversations useful.
September 4, 2025 at 3:40 PM
See, my inner physicist hates the whole "doesn't matter as long as it works" sentiment in the ML community 😂. I want to UNDERSTAND, not just accept... Jokes aside though, I see your point for the purposes of this discussion. I think we've identified a lot of potential in this stream of inquiry 🧐
November 22, 2024 at 9:43 PM
That's somewhat along the lines of what I was thinking as well :)

Also good point about o1. I'd be very interested to see how it performs on the ToM tests!
November 22, 2024 at 9:31 PM
Give the results and discussion a read as well; it's super interesting! There's reason to believe Llama's perfect performance on the faux pas test was illusory (expanded upon in the discussion). The bias you mention is also elaborated upon in the discussion (and I briefly summarize it above).
November 22, 2024 at 9:30 PM
This all raises the question of whether that makes LLMs more or less competent as practitioners of therapy. I think good arguments could be made for both perspectives. 🧵/fin
November 22, 2024 at 8:37 PM
This fact is of course unsurprising (as the authors admit), since humanity's embodiment has placed evolutionary pressure on resolving these uncertainties (i.e. deciding whether to fight or to flee). The disembodiment of LLMs could prevent them from committing to the most likely explanation. 🧵/2
November 22, 2024 at 8:36 PM
I stand corrected. However, the LLMs' failure at the faux pas test underscores the need for further discussion. The failure: "not comput[ing] [mentalistic-like] inferences spontaneously to reduce uncertainty". LLMs are good at emulating human responses, but the underlying cognition is different. 🧵/1
November 22, 2024 at 8:35 PM
I'd argue that until LLMs can implement theory of mind, they'd be much better at diagnosis-oriented therapy. Being able to truly understand a human, form hypotheses, and guide a patient toward resolution is very different from recommending treatment based on a checklist made using the DSM.
November 22, 2024 at 3:26 PM
Mind if I wiggle my way into this 🐛
November 20, 2024 at 4:16 PM