Subbarao Kambhampati (కంభంపాటి సుబ్బారావు)
@rao2z.bsky.social
AI researcher & teacher at SCAI, ASU. Former President of AAAI & Chair of AAAS Sec T. Here to tweach #AI. YouTube Ch: http://bit.ly/38twrAV Twitter: rao2z
This series of lectures was given the same week as all that brouhaha over the Apple illusion paper (I was giving these lectures during the day and talking to reporters in the evening 😅). As such, they are pretty up-to-date! 3/

x.com/rao2z/status...
Some of what that recent Apple LRM limitations paper shows is known (pardon my friendly Schmidhubering; I do welcome more LLM studies with scientific skepticism). Our study 👇 from Sep 2024 shows o1 accuracy degrading as complexity increases.. 1/ https://t.co/d8zEUGi4SZ
June 19, 2025 at 10:27 PM
The lectures start with a "big picture" overview (Lecture 1); focus on standard LLMs and their limitations, and LLM-Modulo as a test-time scaling approach (Lecture 2); and end with a critical appraisal of the test-time scaling and RL post-training techniques (Lecture 3). 2/
June 19, 2025 at 10:27 PM
Reposted by Subbarao Kambhampati (కంభంపాటి సుబ్బారావు)
...it basically confirmed what is already well-established: LLMs (& LRMs & "LLM agents") have trouble w/ problems that require many steps of reasoning/planning.

See, e.g., lots of recent papers by Subbarao Kambhampati's group at ASU. (2/2)
June 9, 2025 at 10:53 PM