Subbarao Kambhampati (కంభంపాటి సుబ్బారావు)
@rao2z.bsky.social
AI researcher & teacher at SCAI, ASU. Former President of AAAI & Chair of AAAS Sec T. Here to tweach #AI. YouTube Ch: http://bit.ly/38twrAV Twitter: rao2z
[On using Continuous Latent Space Vectors in the context windows of Transformers and LLMs] #SundayHarangue
👉 x.com/rao2z/status...
November 3, 2025 at 3:16 PM
𝐏𝐞𝐫𝐟𝐨𝐫𝐦𝐚𝐭𝐢𝐯𝐞 𝐓𝐡𝐢𝐧𝐤𝐢𝐧𝐠? The anthropomorphization of LRM intermediate tokens as thinking begat a cottage industry to "get efficiency by shortening thinking." We ask: 𝗜𝘀 𝗖𝗼𝗧 𝗹𝗲𝗻𝗴𝘁𝗵 𝗿𝗲𝗮𝗹𝗹𝘆 𝗮 𝗿𝗲𝗳𝗹𝗲𝗰𝘁𝗶𝗼𝗻 𝗼𝗳 𝗽𝗿𝗼𝗯𝗹𝗲𝗺 𝗵𝗮𝗿𝗱𝗻𝗲𝘀𝘀 𝗼𝗿 𝗶𝘀 𝗶𝘁 𝗺𝗼𝗿𝗲 𝗽𝗲𝗿𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝘃𝗲? 👉 www.linkedin.com/posts/subbar...
September 10, 2025 at 4:50 PM
Computational Complexity is the wrong measure for LRMs (as it was for LLMs)--think distributional distance instead #SundayHarangue (yes, we're back!)

👉 x.com/rao2z/status...
July 13, 2025 at 9:42 PM
The lectures start with a "big picture" overview (Lecture 1); focus on standard LLMs and their limitations, and LLM-Modulo as a test-time scaling approach (Lecture 2); and end with a critical appraisal of the test-time scaling and RL post-training techniques (Lecture 3). 2/
June 19, 2025 at 10:27 PM
Anthropomorphization of intermediate tokens as reasoning/thinking traces isn't quite a harmless fad, and may be pushing LRM research into questionable directions.. So we decided to put together a more complete argument. Paper 👉 arxiv.org/pdf/2504.09762 (Twitter thread: x.com/rao2z/status...)
May 28, 2025 at 1:41 PM
This RLiNo? paper (arxiv.org/abs/2505.13697) led by Soumya Samineni and Durgesh_kalwar dives into the MDP model used in the RL post-training methods inspired by DeepSeek R1, and asks whether some of the idiosyncrasies of RL aren't just consequences of the simplistic structural assumptions made
May 25, 2025 at 10:51 PM
Do Intermediate Tokens Produced by LRMs (need to) have any semantics? Our new study 👇

Thread 👉 x.com/rao2z/status...
May 21, 2025 at 8:08 PM
Delighted to share that Siddhant Bhambri & Mudit Verma's critical evaluation and refutation of the reasoning claims of ReACT has been accepted to #TMLR (Transactions on Machine Learning Research)

👉 https://openreview.net/forum?id=aFAMPSmNHR
May 13, 2025 at 5:22 PM
It ain't "The Bitter Lesson" if you are in the loop curating the training data for your LLM, y'all.. Pick your lesson, will ya? #SundayHarangue (h/t @kstechly.bsky.social)
May 5, 2025 at 11:44 AM
Our invited commentary for the Annals of the New York Academy of Sciences titled "(How) Do reasoning models reason?" is now online

👉 nyaspubs.onlinelibrary.wiley.com/doi/epdf/10....

It is a written version of my recent talks (and #SundayHarangues) on developments in LRMs..
April 13, 2025 at 5:37 PM
Woo hoo.. Our first #TMLR paper!🤗 On the planning and scheduling abilities of LRMs o1 & R1 (w/ Karthik, Kaya, Atharva)

👉 openreview.net/forum?id=FkK...

Even a jaded researcher like me has to admit that Transactions on Machine Learning Research is a veritable oasis among #AI publication venues! 🙏
April 9, 2025 at 2:29 PM
Test-time scaling, Post-training and Distillation are just compiling the verifier signal into the LLM at different phases #SundayHarangue

See 👉 x.com/rao2z/status...

Or 👉 www.linkedin.com/posts/subbar...
March 16, 2025 at 10:10 PM
Pushing for "human-sounding" traces that have no semantic standing engenders false (undeserved) confidence for the end users. If end accuracy is all you care for, there is no obvious reason to stick to "human-sounding" traces--Let RL be RL--and learn its own prompt augmentation language!
March 10, 2025 at 2:01 PM
Intermediate tokens being dubbed "Reasoning Traces" is the new anthropomorphization fashion.. See this video #SundayHarangue 👉 https://youtube.com/watch?v=CQ5JS3v61Ns&list=PLNONVE5W8PCRbf3WmbcqgXPToJuA2NUfP&t=3787s that wonders whether LRMs should instead be called LMMs--Large Mumbling Models..
March 10, 2025 at 2:00 PM
RL is great; but RL envy in LLMs may not be.. (or R1's SFT vs. RL is more like Batch vs. SGD) #SundayHarangue (Special Turing edition 😋) There has been a tendency in the LLM literature to dress up simplistic ideas in RL garb to gain additional respectability... 👉
x.com/rao2z/status...
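One (hypothetical) way to make the Batch vs. SGD reading concrete: SFT takes gradient steps on a fixed, curated set of demonstrations, while on-policy RL draws fresh samples from the current policy and weights them by a reward. A minimal toy sketch, assuming a two-token softmax "policy" and a 0/1 verifier-style reward (stand-ins for illustration, not the actual R1 recipe):

```python
# Toy sketch of the "SFT vs. RL is like Batch vs. SGD" analogy.
# The 2-action softmax policy and the 0/1 reward are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)                       # logits over two "answer" tokens

def probs(theta):
    e = np.exp(theta - theta.max())
    return e / e.sum()

# --- SFT, batch-flavored: log-likelihood gradient over a FIXED curated set ---
curated = np.array([0, 0, 1, 0])          # fixed demonstrations (token ids)

def sft_step(theta, lr=0.5):
    p = probs(theta)
    grad = np.zeros(2)
    for a in curated:                     # same batch every step, like batch GD
        g = -p
        g[a] += 1.0                       # d/dtheta of log p(a)
        grad += g
    return theta + lr * grad / len(curated)

# --- RL, SGD-flavored: FRESH on-policy samples, reward-weighted (REINFORCE) ---
def reward(a):
    return 1.0 if a == 0 else 0.0         # assumed verifier-style 0/1 signal

def rl_step(theta, lr=0.5, n=4):
    p = probs(theta)
    grad = np.zeros(2)
    for _ in range(n):                    # fresh samples from the CURRENT policy
        a = rng.choice(2, p=p)
        g = -p
        g[a] += 1.0
        grad += reward(a) * g             # reward-weighted log-prob gradient
    return theta + lr * grad / n

for _ in range(50):
    theta = rl_step(theta)                # or sft_step(theta)
print(probs(theta))
```

Both updates are the same weighted log-likelihood gradient; what differs is where the data comes from (a fixed curated batch vs. fresh on-policy samples), which is the sense in which the SFT/RL distinction resembles Batch vs. SGD.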
March 7, 2025 at 11:01 PM
Thank you AAAI & EAAI for this honor..🙏 (my talk at #AAAI2025 will be on Saturday 2pm..)
February 27, 2025 at 4:36 PM
No, Virginia, those Rorschach musings by LRMs may not be "reasoning traces" in any meaningful way.. #SundayHarangue

I have been saying for some time now that the intermediate tokens/"mumblings" that LRMs tell themselves are not to be seen as "reasoning traces" 👉 x.com/rao2z/status...
February 17, 2025 at 2:55 PM
On the MDP formulation of LLMs used in R1 [Not quite #SundayHarangue]

You know DeepSeek R1 uses RL--but did you grok the strange MDP formulation it uses for this? We had a fun 3hr discussion about this in our group meeting last Friday; here is a summary

👉 x.com/rao2z/status...
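The summary itself is in the linked thread, but the basic shape of the formulation under discussion can be sketched. Roughly (a hypothetical sketch, with the names and the verifier as stand-ins, not DeepSeek's actual code): the state is the entire token sequence so far, the action is the next token, the transition just appends that token, and the only reward is an outcome-level verifier score at the end.

```python
# Rough shape of the token-level MDP typically used in R1-style RL post-training.
# A sketch for intuition; State/step/verifier are assumed names, not R1's code.
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    tokens: tuple            # the prompt plus all tokens emitted so far IS the state

EOS = "<eos>"

def step(state: State, action: str, verifier):
    """Deterministic transition: the 'environment' just appends the agent's token."""
    next_state = State(state.tokens + (action,))
    done = action == EOS
    # Sparse, outcome-only reward: nothing until the episode ends,
    # then a single verifier score on the finished sequence.
    reward = verifier(next_state.tokens) if done else 0.0
    return next_state, reward, done
```

Even this toy version shows why the formulation is strange as an MDP: the "environment" contributes no dynamics of its own (transitions merely append the policy's own output), and the only learning signal is a single sparse reward at the terminal state.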
February 10, 2025 at 12:51 AM
Spoke with @nitasha.bsky.social of @washingtonpost.com about reasoning models like DeepSeek R1 for her article, which accurately reflects my view about the Rorschach nature of the so-called "reasoning traces" 👉 https://www.washingtonpost.com/technology/2025/02/08/deepseek-ai-chatbot-reasoning-china/
February 8, 2025 at 6:02 PM
For anyone who saw my MLST interview from December and wanted to make sense of DeepSeek R1 from that prompt augmentation perspective, here is an R1 delta addendum you might enjoy: 👉 x.com/rao2z/status...
February 3, 2025 at 5:23 PM
In case you are interested, here is the performance of DeepSeek's R1 model on PlanBench (thanks to Karthik Valmeekam)

Explanatory thread here: x.com/rao2z/status...
January 21, 2025 at 8:37 PM
On the seedy optics of "Building an AGI Moat by Corralling Benchmark Creators" #SundayHarangue

[Thoughts on OpenAI/Frontier Math benchmark story]

x.com/rao2z/status...
January 19, 2025 at 9:53 PM
2025 IS GOING TO BE A HUGE YEAR FOR AI AGENTS!!!

Russell & Norvig recently published this great textbook on AGENTS!!

𝘏𝘦𝘳𝘦 𝘪𝘴 𝘸𝘩𝘢𝘵 𝘪𝘴 𝘪𝘯𝘤𝘭𝘶𝘥𝘦𝘥

AI Complete!

𝘞𝘩𝘦𝘳𝘦 𝘤𝘢𝘯 𝘺𝘰𝘶 𝘭𝘦𝘢𝘳𝘯 𝘢𝘣𝘰𝘶𝘵 𝘪𝘵 𝘢𝘭𝘭?

Your neighborhood Intro #AI course (e.g., rakaposhi.eas.asu.edu/cse471)

See also 👉 x.com/rao2z/status...
January 8, 2025 at 11:18 AM
Happy New Year from Kailash.. 😍 #Ellora
January 1, 2025 at 1:23 PM
A meta list of all my 2024 #SundayHarangues

Not quite sure why, but I apparently wrote sixteen long #AI-related Sunday Harangues in 2024.. 😅

Most were first posted on Twitter.

👉 https://x.com/rao2z/status/1873214567091966189
December 29, 2024 at 12:50 PM