Zachary Huang
@zacharyhuang.bsky.social
Researcher @MSFTResearch AI Frontiers. LLM Agents and Systems. | PhD @ColumbiaCompSci | Prev: @GraySystemsLab @databricks | Fellowship: @GoogleAI | New YouTuber
for those who want the full dopamine blast on neural nets: youtu.be/SXnHqFGLNxA
Give Me 40 min, I'll Make Neural Network Click Forever
September 7, 2025 at 6:39 PM
I upgraded to the $100 tier and it worked well
August 21, 2025 at 1:57 AM
IMO fixing agent design is much easier than funding unlimited tokens. Claude is already nerfing its limits, while Google has deep pockets and cash to burn.
August 3, 2025 at 2:46 AM
Then there's the Gemini CLI. It has (b) covered (Gemini 2.5 Pro has a massive 1M-token context vs. Claude's 200K, plus top-notch coding skills), but it fails hard on (a) with its lousy agent design, leaving the entire Gemini CLI experience underwhelming.
August 3, 2025 at 2:46 AM
Claude Code nails both (a) and (b): it dumps your entire codebase into its context without hesitation, letting powerful models shine on long inputs. However, Anthropic has recently started nerfing (b) with tighter token limits.
August 3, 2025 at 2:46 AM
Cursor nails (a) but skimps on (b): it throttles tokens hard, reading tiny chunks instead of dumping the full codebase as Claude does. Top-tier LLMs handle long contexts fine, but Cursor apparently can't afford it.
August 3, 2025 at 2:46 AM
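A rough sketch of the contrast between the two approaches (a toy Python illustration; the function names, file extensions, and 200-line window are made up and not how either product actually works):

import pathlib

def dump_codebase(root, exts=(".py", ".js", ".ts")):
    # "Claude Code style": concatenate every source file into one giant prompt.
    parts = []
    for path in sorted(pathlib.Path(root).rglob("*")):
        if path.is_file() and path.suffix in exts:
            parts.append(f"# FILE: {path}\n{path.read_text(errors='ignore')}")
    return "\n\n".join(parts)  # can easily run past 200K tokens on a real repo

def read_small_chunk(path, start=0, n_lines=200):
    # "Cursor style": only hand the model a small window of one file at a time.
    lines = pathlib.Path(path).read_text(errors="ignore").splitlines()
    return "\n".join(lines[start:start + n_lines])

The first maximizes what the model sees per call (and what each call costs); the second keeps the token bill small at the expense of context.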
For coding agents, two things matter most: (a) solid agent design, and (b) generous token usage for a top-tier LLM.
Everything else Cursor offers (codebase indexing, specialized models that apply edits, IDE UI/UX for human approvals) is just incremental.
August 3, 2025 at 2:46 AM