Subbarao Kambhampati (కంభంపాటి సుబ్బారావు)
rao2z.bsky.social
AI researcher & teacher at SCAI, ASU. Former President of AAAI & Chair of AAAS Sec T. Here to tweach #AI. YouTube Ch: http://bit.ly/38twrAV Twitter: rao2z
Pinned
A meta list of all my 2024 #SundayHarangue's

Not quite sure why, but I apparently wrote sixteen long #AI related Sunday Harangues in 2024.. 😅.

Most were first posted on twitter.

👉 https://x.com/rao2z/status/1873214567091966189
[On using Continuous Latent Space Vectors in the context windows of Transformers and LLMs] #SundayHarangue
👉 x.com/rao2z/status...
November 3, 2025 at 3:16 PM
My talk at Samsung AI Forum yesterday
www.youtube.com/watch?v=L2nA...
LRMs and Agentic AI (Talk at Samsung AI Forum)
YouTube video by Subbarao Kambhampati
September 16, 2025 at 5:39 PM
In the year since LRMs ("reasoning models") hit the scene, we have been trying to understand, analyze and demystify them.. Here are our efforts to date--conveniently all in one place..👇

www.linkedin.com/posts/subbar...
September 14, 2025 at 10:00 PM
𝐏𝐞𝐫𝐟𝐨𝐫𝐦𝐚𝐭𝐢𝐯𝐞 𝐓𝐡𝐢𝐧𝐤𝐢𝐧𝐠? The anthropomorphization of LRM intermediate tokens as thinking begat a cottage industry to "get efficiency by shortening thinking." We ask: 𝗜𝘀 𝗖𝗼𝗧 𝗹𝗲𝗻𝗴𝘁𝗵 𝗿𝗲𝗮𝗹𝗹𝘆 𝗮 𝗿𝗲𝗳𝗹𝗲𝗰𝘁𝗶𝗼𝗻 𝗼𝗳 𝗽𝗿𝗼𝗯𝗹𝗲𝗺 𝗵𝗮𝗿𝗱𝗻𝗲𝘀𝘀 𝗼𝗿 𝗶𝘀 𝗶𝘁 𝗺𝗼𝗿𝗲 𝗽𝗲𝗿𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝘃𝗲? 👉 www.linkedin.com/posts/subbar...
September 10, 2025 at 4:50 PM
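The question the post raises can be made concrete with a minimal, hypothetical sketch: pair each problem's hardness measure (e.g., optimal plan length) with the number of intermediate tokens the model emits, and check whether the two actually correlate. All numbers below are made up purely for illustration; the roughly flat trace lengths mimic the "performative" scenario the post asks about.

```python
# Hypothetical sketch: does chain-of-thought length track problem hardness?
# The data is fabricated for illustration only.

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical problems: hardness score vs. CoT token count.
hardness = [2, 4, 6, 8, 10, 12]
cot_len = [900, 850, 1100, 880, 1050, 940]  # roughly flat across hardness

r = pearson(hardness, cot_len)
print(round(r, 2))  # a weak correlation would support the "performative" reading
```

A correlation near zero on real traces would suggest that trace length reflects something other than problem difficulty; a strong positive correlation would support the "hardness" reading.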
Both LLMs and LRMs are upper bounded by humanity's knowledge closure. True scientific discoveries are, by definition, outside of that closure. Ergo, LLMs/LRMs are great force multipliers to us; but don't support "Nobel this weekend" hype..

👉 www.linkedin.com/posts/subbar...
July 19, 2025 at 10:18 PM
Computational Complexity is the wrong measure for LRMs (as it was for LLMs)--think distributional distance instead #SundayHarangue (yes, we're back!)

👉 x.com/rao2z/status...
July 13, 2025 at 9:42 PM
A̶I̶ ̶(̶A̶r̶t̶i̶f̶i̶c̶i̶a̶l̶ ̶I̶n̶t̶e̶l̶l̶i̶g̶e̶n̶c̶e̶)̶
A̶G̶I̶ ̶(̶A̶r̶t̶i̶f̶i̶c̶i̶a̶l̶ ̶G̶e̶n̶e̶r̶a̶l̶ ̶I̶n̶t̶e̶l̶l̶i̶g̶e̶n̶c̶e̶)̶
A̶S̶I̶ ̶(̶A̶r̶t̶i̶f̶i̶c̶i̶a̶l̶ ̶S̶u̶p̶e̶r̶ ̶I̶n̶t̶e̶l̶l̶i̶g̶e̶n̶c̶e̶)̶
ASDI (Artificial Super Duper Intelligence)

Don't get stuck with yesterday's hypeonyms!
Dare to get to the next level!

#AIAphorisms
June 23, 2025 at 10:36 PM
For anyone interested, here are the videos of the three ~50-minute lectures on the reasoning/planning capabilities of LLMs/LRMs that I gave at #ACDL2025 at the Riva Del Sole resort last week. 1/

www.youtube.com/playlist?lis...
ACDL Summer School Lectures on Planning/Reasoning Abilities of LLMs/LRMs - YouTube
June 19, 2025 at 10:27 PM
Reposted by Subbarao Kambhampati (కంభంపాటి సుబ్బారావు)
...it basically confirmed what is already well-established: LLMs (& LRMs & "LLM agents") have trouble w/ problems that require many steps of reasoning/planning.

See, e.g., lots of recent papers by Subbarao Kambhampati's group at ASU. (2/2)
June 9, 2025 at 10:53 PM
An AGI-wannabe reasoning model whining that it couldn't handle a problem because its context window isn't big enough is like a superman-wannabe little kid protesting that he couldn't add those numbers because he doesn't have enough fingers and toes.. #AIAphorisms
June 16, 2025 at 12:47 AM
Reposted by Subbarao Kambhampati (కంభంపాటి సుబ్బారావు)
"our counter-intuitive results demonstrate ways in which common interpretations of Large Reasoning Models may be anthropomorphizations or simplifications" arxiv.org/abs/2505.13775
Beyond Semantics: The Unreasonable Effectiveness of Reasonless Intermediate Tokens
June 1, 2025 at 1:30 PM
The transformer expressiveness results are often a bit of a red herring, as there tends to be a huge gap between what can be expressed in transformers and what can be learned with gradient descent. Mind the Gap, a new paper with Lucas Saldyt, dives deeper into this issue 👇👇

x.com/SaldytLucas/...
Lucas Saldyt on X: "Neural networks can express more than they learn, creating expressivity-trainability gaps. Our paper, “Mind The Gap,” shows neural networks best learn parallel algorithms, and analyzes gaps in faithfulness and effectiveness. @rao2z https://t.co/8YjxPkXFu0" / X
May 30, 2025 at 1:59 PM
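The "express more than they learn" point can be illustrated with a toy example (not a transformer, and not from the paper; purely illustrative): XOR is trivially *expressible* by a tiny hand-wired ReLU network, even though small MLPs trained by gradient descent from random initializations are well known to sometimes fail to find such a solution.

```python
# A hand-wired two-hidden-unit ReLU network that expresses XOR exactly.
# Expressibility is demonstrated by construction; trainability is a
# separate question, which is the gap the post is about.

def relu(x):
    return max(0.0, x)

def xor_net(a, b):
    # h1 fires when at least one input is on; h2 fires only when both are.
    h1 = relu(a + b - 0.5)
    h2 = relu(a + b - 1.5)
    # h1 - 3*h2 yields 0.5 exactly on the XOR-true inputs, 0.0 otherwise.
    return h1 - 3.0 * h2

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))
```

The construction shows the function lives in the hypothesis class; whether gradient descent reliably reaches it from random initializations is exactly the kind of expressivity-trainability question the paper studies at scale.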
Anthropomorphization of intermediate tokens as reasoning/thinking traces isn't quite a harmless fad, and may be pushing LRM research into questionable directions.. So we decided to put together a more complete argument. Paper 👉 arxiv.org/pdf/2504.09762 (Twitter thread: x.com/rao2z/status...)
May 28, 2025 at 1:41 PM
This RLiNo? paper (arxiv.org/abs/2505.13697), led by Soumya Samineni and Durgesh_kalwar, dives into the MDP model used in the RL post-training methods inspired by DeepSeek R1, and asks whether some of the idiosyncrasies of RL aren't just consequences of the simplistic structural assumptions made.
May 25, 2025 at 10:51 PM
Do Intermediate Tokens Produced by LRMs (need to) have any semantics? Our new study 👇

Thread 👉 x.com/rao2z/status...
May 21, 2025 at 8:08 PM
Delighted to share that Siddhant Bhambri & Mudit Verma's critical evaluation and refutation of the reasoning claims of ReAct has been accepted to #TMLR (Transactions on Machine Learning Research)

👉 https://openreview.net/forum?id=aFAMPSmNHR
May 13, 2025 at 5:22 PM
IMHO, the whole idea of connecting "length of intermediate tokens" produced by LRMs to inference time compute is a mind-boggling demonstration of circular reasoning--that comes from the assumptions about MDP model and reward model.. 👇

x.com/rao2z/status...
May 9, 2025 at 2:42 PM
It ain't "The Bitter Lesson" if you are in the loop curating the training data for your LLM, y'all.. Pick your lesson, will ya? #SundayHarangue (h/t @kstechly.bsky.social)
May 5, 2025 at 11:44 AM
Reposted by Subbarao Kambhampati (కంభంపాటి సుబ్బారావు)
Don't use summarizers for the papers by @rao2z.bsky.social because the reasoning traces therein are, unlike the LRMs & LLMs under investigation, substantively meaningful, semantically well-ordered, and stylistically compelling and engaging!
#AI #LLMs #CoT
arxiv.org/abs/2504.09762
(How) Do reasoning models reason?
April 19, 2025 at 7:23 PM
Here is a recording of my talk at @msftresearch.bsky.social last week titled "(How) Do LLMs Reason/Plan?" (Also gave a version of it as a distinguished lecture at Oracle today..)

www.youtube.com/watch?v=0u2h...
(How) Do LLMs Reason/Plan? (Talk given at Microsoft Research; 4/11/25)
YouTube video by Subbarao Kambhampati
April 16, 2025 at 12:38 AM
A preprint available at arxiv.org/abs/2504.09762
Our invited commentary for the Annals of the New York Academy of Sciences titled "(How) Do reasoning models reason?" is now online

👉 nyaspubs.onlinelibrary.wiley.com/doi/epdf/10....

It is a written version of my recent talks (and #SundayHarangues) on the recent developments in LRMs..
April 15, 2025 at 5:23 PM