Richard C. Suwandi
@richardcsuwandi.bsky.social
PhD-ing at CUHK-Shenzhen. Building evolutionary coding agents at Dria. #AI4Science community leader at alphaXiv

richardcsuwandi.github.io
We believe CAKE is just a slice of a bigger future where models evolve continuously alongside the problems they solve 🧬

Looking forward to presenting this work in San Diego this December!

📄 Paper: alphaxiv.org/abs/2509.179...
💻 Code: github.com/richardcsuwa...
September 27, 2025 at 2:30 PM
Beyond BO, CAKE is a universal framework for adaptive kernel design that can be easily extended to other kernel-based methods, including:

👉 Support vector machines
👉 Kernel PCA
👉 Metric learning

Wherever kernels encode assumptions, CAKE can help them learn from context!
September 27, 2025 at 2:30 PM
Our analysis also revealed that LLM-guided evolution consistently improves population fitness, significantly outperforming both random recombination and traditional genetic algorithms
September 27, 2025 at 2:30 PM
CAKE also excelled in the multi-objective setting:

- Achieved the highest overall score and hypervolume for photonic chip design
- Demonstrated a tenfold speedup in finding high-quality solutions
September 27, 2025 at 2:30 PM
On 60 HPOBench tasks, CAKE demonstrated superior performance:

- Consistently achieved the highest average test accuracy across all ML models
- Showed rapid early progress, achieving 67.5% of total improvement within 25% of the budget
September 27, 2025 at 2:30 PM
1️⃣ How well the kernel explains the observed data (as measured by model fit)
2️⃣ How promising the kernel’s proposed next query point is (as measured by acquisition value)
September 27, 2025 at 2:30 PM
🤔 If we have a pool of kernels, which kernel should guide the next query?

We propose BIC-Acquisition Kernel Ranking (BAKER) 👨‍🍳 to select the best kernel at each step by jointly optimizing two criteria:
September 27, 2025 at 2:30 PM
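A minimal sketch of a BAKER-style selection step (illustrative assumptions only: sklearn GPs, BIC counted over kernel hyperparameters, expected improvement as the acquisition, and a simple rank-sum to combine the two criteria; the paper's exact combination rule may differ):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic

def bic(gp, n):
    # BIC = k*ln(n) - 2*log marginal likelihood, with k = number of kernel hyperparameters
    k = len(gp.kernel_.theta)
    return k * np.log(n) - 2.0 * gp.log_marginal_likelihood_value_

def best_expected_improvement(gp, X_cand, y_best):
    # Max expected improvement (minimization) over a grid of candidate points
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu) / sigma
    return float(np.max((y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)))

def baker_select(kernels, X, y, X_cand):
    # Rank each candidate kernel by (low) BIC and (high) acquisition value,
    # then pick the kernel with the best combined rank
    fits = [GaussianProcessRegressor(kernel=k, normalize_y=True).fit(X, y) for k in kernels]
    bics = np.array([bic(gp, len(y)) for gp in fits])
    eis = np.array([best_expected_improvement(gp, X_cand, y.min()) for gp in fits])
    rank = np.argsort(np.argsort(bics)) + np.argsort(np.argsort(-eis))  # lower = better
    return fits[int(np.argmin(rank))]

# Toy usage: three candidate kernels competing to guide the next query
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(15, 1))
y = np.sin(3 * X).ravel() + 0.1 * rng.standard_normal(15)
X_cand = np.linspace(-3, 3, 200).reshape(-1, 1)
chosen = baker_select([RBF(), Matern(nu=2.5), RationalQuadratic()], X, y, X_cand)
print("kernel guiding the next query:", chosen.kernel_)
```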
CAKE works via an evolutionary process:

1️⃣ Initialize a population of base kernels
2️⃣ Score each kernel using a fitness function
3️⃣ Evolve kernels via LLM-driven crossover and mutation to generate new candidates
4️⃣ Select top-performing kernels for the next generation
September 27, 2025 at 2:30 PM
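A minimal Python sketch of this loop (illustrative, not the released code): GP log marginal likelihood stands in for the fitness function, and random recombination of sklearn kernels stands in for the LLM-driven crossover and mutation so the example runs on its own.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic

rng = np.random.default_rng(0)
BASE_KERNELS = [RBF(), Matern(nu=2.5), RationalQuadratic()]

def fitness(kernel, X, y):
    # Stand-in fitness: GP log marginal likelihood on the observed data
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    return gp.log_marginal_likelihood_value_

def propose_offspring(parents):
    # Placeholder for CAKE's LLM-driven operators: crossover combines two
    # parents with + or *, mutation appends a random base kernel
    i, j = rng.choice(len(parents), size=2, replace=False)
    child = parents[i] + parents[j] if rng.random() < 0.5 else parents[i] * parents[j]
    if rng.random() < 0.3:
        child = child + BASE_KERNELS[rng.integers(len(BASE_KERNELS))]
    return child

# Toy "observed data" (in BO, the points evaluated so far)
X = rng.uniform(-3, 3, size=(20, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(20)

population = list(BASE_KERNELS)                                  # 1) initialize
for generation in range(3):
    scored = sorted(((fitness(k, X, y), k) for k in population),
                    key=lambda t: t[0], reverse=True)            # 2) score
    survivors = [k for _, k in scored[:3]]                       # 4) select top kernels
    children = [propose_offspring(survivors) for _ in range(3)]  # 3) evolve
    population = survivors + children

print("fittest kernel:", max(population, key=lambda k: fitness(k, X, y)))
```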
Rather than committing to a fixed kernel, CAKE uses LLMs as intelligent genetic operators to dynamically evolve the kernel as more data is observed during the optimization process
September 27, 2025 at 2:30 PM
🤔 How do we design kernels that adapt to the observed data, especially when evaluations are expensive?

Our solution: Context-Aware Kernel Evolution (CAKE) 🍰
September 27, 2025 at 2:30 PM
The efficiency of BO depends critically on the choice of the GP kernel, which encodes structural assumptions about the underlying objective

⚠️ A poor kernel choice can lead to biased exploration, slow convergence, and suboptimal solutions!
September 27, 2025 at 2:30 PM
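A quick illustration of that point (not from the paper): fitting the same observations under two different structural assumptions can yield very different surrogates, and the surrogate is what drives BO's exploration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(12, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(12)   # a periodic objective

for kernel in [RBF(), ExpSineSquared()]:
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    print(f"{type(kernel).__name__:>14}  log marginal likelihood: "
          f"{gp.log_marginal_likelihood_value_:.2f}")
# A kernel whose assumptions match the objective's structure explains the
# data better and proposes better queries; a mismatched one biases exploration.
```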
This shortcut works—until we need breakthroughs. From robotics to drug discovery to aligning LLMs, real progress demands intelligent exploration.

I wrote a blog on why we need to re-center exploration in AI 👇
richardcsuwandi.github.io/blog/2025/ex...
The Science of Intelligent Exploration | Richard Cornelius Suwandi
Why we need to re-center exploration in AI
July 23, 2025 at 7:17 PM
I wrote a blog post diving into the world of open-ended AI, exploring how embracing open-endedness might help us break the limits of today’s AI systems 👇

richardcsuwandi.github.io/blog/2025/op...
The future of AI is open-ended | Richard Cornelius Suwandi
Embracing open-endedness in the pursuit of creative AI
June 27, 2025 at 4:15 PM
From inventing new musical genres to imagining life beyond our universe, we continuously push the boundaries of what’s possible.

What if AI could be as endlessly creative as humans or even nature itself?
June 27, 2025 at 4:15 PM
Researchers at Google DeepMind found that if an AI agent can tackle complex, long-horizon tasks, it must have learned an internal world model, and that this model can even be extracted just by observing the agent's behavior.

I wrote a blog post unpacking this groundbreaking paper and what it means for the future of AGI 👇
No world model, no general AI | Richard Cornelius Suwandi
From Ilya's prediction to Google DeepMind's proof.
richardcsuwandi.github.io
June 11, 2025 at 5:31 PM
But what if AI could learn and improve its own capabilities without human intervention? I wrote a blog post to explore this concept further and examine what it could mean for the future of AI 👇

richardcsuwandi.github.io/blog/2025/dgm/
AI that can improve itself | Richard Cornelius Suwandi
A deep dive into self-improving AI and the Darwin-Gödel Machine.
June 3, 2025 at 4:59 PM
This is the Achilles' heel of modern AI: like a car, no matter how well the engine is tuned or how skilled the driver is, it cannot change its body structure or engine type on its own to adapt to a new track.
June 3, 2025 at 4:59 PM