Ino.Ichi
@inoichan.bsky.social
Research Engineer at Sakana AI / PhD Pharm. Sci. at Kyoto Univ. / Kaggle Grandmaster
Reposted by Ino.Ichi
New Job Posting!

We’re looking to hire experienced Software Engineers to join our R&D team. You will help productionize our advanced AI-driven discovery platform and our model-development efforts.

sakana.ai/careers/#sof...

Japanese language fluency is not required for this role.
November 29, 2025 at 3:09 AM
Reposted by Ino.Ichi
Announcing our Series B 🐟

sakana.ai/series-b
November 16, 2025 at 11:59 PM
Reposted by Ino.Ichi
Excited to announce Sakana AI’s Series B! 🐟
sakana.ai/series-b

From day one, Sakana AI has done things differently. Our research has always focused on developing efficient AI technology sustainably, driven by the belief that resource constraints—not limitless compute—are key to true innovation.
November 17, 2025 at 12:03 AM
Reposted by Ino.Ichi
Coverage of Darwin Gödel Machine and The AI Scientist in an MIT Technology Review article. @technologyreview.com
www.technologyreview.com/2025/08/06/1...
Five ways that AI is learning to improve itself
From coding to hardware, LLMs are speeding up research progress in artificial intelligence. It could be the most important trend in AI today.
www.technologyreview.com
August 9, 2025 at 2:37 AM
Reposted by Ino.Ichi
Using frontier AI models by "mixing" them:
toward new inference-time scaling through "trial and error" and "collective intelligence"

Blog: sakana.ai/ab-mcts-jp/
Paper: arxiv.org/abs/2503.04412

Sakana AI has developed a new algorithm, AB-MCTS, and obtained promising results on the ARC-AGI-2 benchmark.
July 1, 2025 at 4:39 AM
Reposted by Ino.Ichi
Giving AI more "trial and error" and "collective intelligence": the new algorithm from Sakana AI

wired.jp/article/saka...

Japan-based AI startup Sakana AI announced that "mixing" frontier AI models, rather than using any single model such as ChatGPT, Gemini, or DeepSeek on its own, can deliver substantially better results than the individual models.
Giving AI more "trial and error" and "collective intelligence": the new algorithm from Sakana AI
wired.jp
July 1, 2025 at 8:40 AM
Reposted by Ino.Ichi
For AI too, "many heads are better than one": Sakana AI develops a new inference method

The proverb "three people together have the wisdom of Manjushri" (many heads are better than one) turns out to apply to AI as well. 🐡🐟🐠
xtech.nikkei.com/atcl/nxt/new...
For AI too, "many heads are better than one": Sakana AI develops a new inference method
On July 1, 2025, Sakana AI announced Multi-LLM AB-MCTS (Adaptive Branching Monte Carlo Tree Search), an algorithm in which multiple large language models (LLMs) cooperate at inference time to solve problems that are difficult for any single LLM.
xtech.nikkei.com
July 2, 2025 at 5:54 AM
Reposted by Ino.Ichi
Wider or Deeper? Scaling LLM Inference-Time Compute with Adaptive Branching Tree Search

arxiv.org/abs/2503.04412
July 3, 2025 at 12:41 AM
Reposted by Ino.Ichi
Sakana AI is rapidly building out its Applied Team and continues to hire talented Applied Research Engineers 🚀

sakana.ai/careers/#app...

We welcome student interns as well as full-time hires ✨

If you are interested in work ranging from enterprise domains such as finance and insurance to public-sector domains such as government and defense,
or if you want to make a real-world impact by deploying cutting-edge AI technology,
please apply! Employment period and work style are open to discussion.
July 3, 2025 at 4:10 AM
Just published a blog post on our new LLM answer search method, "Multi-LLM AB-MCTS" 🚀 It's designed to flexibly explore how to search and which LLM to use for any given problem. We've also open-sourced the implementation and experiments. Check it out! 🙌
We’re excited to introduce AB-MCTS!

Our new inference-time scaling algorithm enables collective intelligence for AI by allowing multiple frontier models (like Gemini 2.5 Pro, o4-mini, DeepSeek-R1-0528) to cooperate.

Blog: sakana.ai/ab-mcts
Paper: arxiv.org/abs/2503.04412
July 1, 2025 at 3:16 AM
Reposted by Ino.Ichi
We’re excited to introduce AB-MCTS!

Our new inference-time scaling algorithm enables collective intelligence for AI by allowing multiple frontier models (like Gemini 2.5 Pro, o4-mini, DeepSeek-R1-0528) to cooperate.

Blog: sakana.ai/ab-mcts
Paper: arxiv.org/abs/2503.04412
July 1, 2025 at 1:18 AM
Reposted by Ino.Ichi
Inspired by the power of human collective intelligence, where great achievements arise from the collaboration of diverse minds, we believe the same principle applies to AI. Individual models possess unique strengths and biases, which we view as valuable resources for collective problem-solving.
July 1, 2025 at 1:20 AM
Reposted by Ino.Ichi
AB-MCTS (Adaptive Branching Monte Carlo Tree Search) harnesses these individualities, allowing multiple models to cooperate and engage in effective trial and error, solving problems that are too challenging for any single AI.
July 1, 2025 at 1:21 AM
Reposted by Ino.Ichi
Our initial results on the ARC-AGI-2 benchmark are promising: AB-MCTS combining the current frontier models o4-mini, Gemini-2.5-Pro, and R1-0528 substantially outperforms each individual model.
July 1, 2025 at 1:21 AM
Reposted by Ino.Ichi
This research builds on our 2024 work on evolutionary model merge, shifting focus from “mixing to create” to “mixing to use” existing, powerful AIs.

At Sakana AI, we remain committed to pioneering novel AI systems by applying nature-inspired principles such as evolution and collective intelligence.
July 1, 2025 at 1:22 AM
Reposted by Ino.Ichi
We believe this work represents a step toward a future where AI systems collaboratively tackle complex challenges, much like a team of human experts, unlocking new problem-solving capabilities and moving beyond single-model limitations.

Algorithm (TreeQuest): github.com/SakanaAI/tre...
GitHub - SakanaAI/treequest: A Tree Search Library with Flexible API for LLM Inference-Time Scaling
A Tree Search Library with Flexible API for LLM Inference-Time Scaling - SakanaAI/treequest
github.com
July 1, 2025 at 1:23 AM
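The adaptive-branching idea can be sketched in a few lines. This is a toy illustration under stated assumptions, not the TreeQuest API: `models` are stand-in generator functions and `score_fn` is a hypothetical evaluator returning feedback in [0, 1].

```python
import random

class Node:
    """One candidate answer in the search tree."""
    def __init__(self, answer, score):
        self.answer = answer
        self.score = score          # evaluator feedback in [0, 1]
        self.children = []

def ab_mcts(models, score_fn, seed_answer, budget=32):
    """Each iteration walks down the tree, weighing a fresh attempt
    ("go wider") against refining an existing candidate ("go deeper"),
    and samples which model generates the next answer."""
    root = Node(seed_answer, score_fn(seed_answer))
    best = root
    for _ in range(budget):
        node = root
        # descend while an existing child looks more promising than widening
        while node.children:
            child = max(node.children,
                        key=lambda c: c.score + random.gauss(0, 0.1))
            if random.random() > child.score:    # widen at this node instead
                break
            node = child                         # deepen into the best child
        model = random.choice(models)            # sample which LLM to call next
        answer = model(node.answer)
        new = Node(answer, score_fn(answer))
        node.children.append(new)
        if new.score > best.score:
            best = new
    return best
```

On a real task, `model` would prompt an actual LLM with the parent answer and `score_fn` would run tests or a verifier; here both are placeholders.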
Reposted by Ino.Ichi
Inference-Time Scaling and Collective Intelligence for Frontier AI

sakana.ai/ab-mcts/

We developed AB-MCTS, a new inference-time scaling algorithm that enables multiple frontier AI models to cooperate, achieving promising initial results on the ARC-AGI-2 benchmark.
July 1, 2025 at 2:21 AM
Reposted by Ino.Ichi
"Thinking over time": a new paradigm for AI. Introducing the Continuous Thought Machine (CTM) 🧠

Japanese blog: sakana.ai/ctm-jp
Interactive report: pub.sakana.ai/ctm

Sakana AI has announced the Continuous Thought Machine (CTM), a new AI model that explicitly handles temporal information.
May 12, 2025 at 6:27 AM
Reposted by Ino.Ichi
“Continuous Thought Machines”

Blog → sakana.ai/ctm

Modern AI is powerful, but it's still distinct from human-like flexible intelligence. We believe neural timing is key. Our Continuous Thought Machine is built from the ground up to use neural dynamics as a powerful representation for intelligence.
May 12, 2025 at 2:33 AM
Reposted by Ino.Ichi
The Continuous Thought Machine (CTM) incorporates neuron-level temporal processing and neural synchronization, moving beyond current AI limitations.

Interactive Paper (with web-demo): pub.sakana.ai/ctm/
Full Paper: arxiv.org/abs/2505.05522
GitHub Project: github.com/SakanaAI/con...
May 12, 2025 at 2:36 AM
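The synchronization idea can be illustrated with a small NumPy sketch (a conceptual illustration, not the project's code): record each neuron's post-activation over internal ticks, then read a representation from pairwise inner products of those traces.

```python
import numpy as np

def synchronization_matrix(history):
    """history: (T, D) array of D neurons' post-activations over T internal
    ticks. Entry S[i, j] is the inner product of neuron i's and neuron j's
    activation traces, i.e. how synchronized the two neurons are over time."""
    return history.T @ history

rng = np.random.default_rng(0)
hist = rng.standard_normal((8, 4))     # 8 internal ticks, 4 neurons
S = synchronization_matrix(hist)       # (4, 4) symmetric matrix
```

The point of the sketch: the representation is built from how neurons co-vary across internal ticks, rather than from a single-tick activation snapshot.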
Reposted by Ino.Ichi
Japan’s Sakana AI sees opportunity with US uncertainty

“While there are many AI companies in the US and China, Japanese firms have had little global presence. We believe there’s a demand—particularly among government agencies—for domestically developed AI solutions.”
asia.nikkei.com/Business/Tec...
Japan's Sakana AI sees opportunity with US uncertainty
Startup expects more demand for homegrown defense-related technologies
asia.nikkei.com
May 1, 2025 at 2:20 AM
Reposted by Ino.Ichi
1/ Transformer-Squared: Self-adaptive LLMs

Paper: openreview.net/forum?id=dh4...

Transformer-Squared adapts its weights on the fly for each query, achieving strong performance across tasks and enabling parameter-efficient life-long learning.
April 21, 2025 at 9:51 AM
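A rough sketch of per-query weight adaptation via singular values (illustrative only; names and shapes are assumptions): decompose a frozen weight matrix once, then at inference rescale its singular values with a small task vector chosen for the query.

```python
import numpy as np

def adapt(W, z):
    """Rescale only the singular values of a frozen weight matrix W with a
    task vector z; z of all ones leaves W unchanged."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(s * z) @ Vt

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 4))
W_id = adapt(W, np.ones(4))        # identity task vector reproduces W
W_amp = adapt(W, np.full(4, 2.0))  # doubling the singular values scales W
```

In the real method, such task vectors would be trained and then selected per query at inference; here they are fixed by hand to show the mechanics.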
Reposted by Ino.Ichi
2/ Agent Skill Acquisition for Large Language Models via CycleQD

Paper: openreview.net/forum?id=Kvd...

CycleQD is an ecological-niche-inspired model-merging approach that achieves strong performance on computer science tasks while retaining language capabilities.
April 21, 2025 at 9:52 AM
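The quality-diversity cycling can be sketched abstractly (a toy MAP-Elites-style step with hypothetical task metrics, not the actual merging code): one task's metric plays the role of quality while the remaining tasks' metrics define the archive's behavior coordinates, and the roles rotate across generations.

```python
import random

def cycle_qd_step(archive, tasks, quality_task, mutate):
    """One step with `quality_task` as the objective; the other task
    metrics form the behavior coordinates used as the archive key."""
    parent = random.choice(list(archive.values()))
    child = mutate(parent)
    cell = tuple(round(tasks[t](child), 1)
                 for t in sorted(tasks) if t != quality_task)
    incumbent = archive.get(cell)
    if incumbent is None or tasks[quality_task](child) > tasks[quality_task](incumbent):
        archive[cell] = child        # keep the better performer in this niche
    return archive

random.seed(0)
# toy "models" are floats; each metric rewards proximity to a different optimum
tasks = {"coding": lambda x: -abs(x - 1.0), "os": lambda x: -abs(x + 1.0)}
archive = {(): 0.0}                  # seed archive with one candidate
for step in range(40):
    quality = ["coding", "os"][step % 2]   # rotate which skill is optimized
    archive = cycle_qd_step(archive, tasks, quality,
                            lambda x: x + random.gauss(0, 0.3))
```

The rotation is the key design choice: every skill takes a turn as the objective, while the archive preserves diversity along the others so earlier skills are not overwritten.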
Reposted by Ino.Ichi
3/ An Evolved Universal Transformer Memory

Paper: openreview.net/forum?id=s1k...

Neural Attention Memory Models (NAMMs) are an evolved memory system trained to improve the performance and efficiency of language transformers, and they transfer zero-shot to vision and RL foundation models.
April 21, 2025 at 9:53 AM
Reposted by Ino.Ichi
4/ TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models

Paper: openreview.net/forum?id=cqs...

TAID is a novel knowledge distillation method that uses a time-dependent intermediate distribution addressing common challenges in distilling LLMs.
April 21, 2025 at 9:54 AM
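The time-dependent intermediate distribution can be sketched as a simple interpolation in probability space (an illustration; the actual method also adapts the interpolation schedule): the distillation target mixes the student's own distribution with the teacher's, with the mixing weight rising from 0 toward 1 over training.

```python
import numpy as np

def taid_target(student, teacher, lam):
    """Intermediate target: (1 - lam) * student + lam * teacher.
    Early in training (small lam) the target stays near the student,
    reducing the capacity gap; late in training it approaches the teacher."""
    return (1.0 - lam) * student + lam * teacher

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

student = np.array([0.7, 0.2, 0.1])
teacher = np.array([0.1, 0.6, 0.3])
early = taid_target(student, teacher, 0.1)   # near the student
late = taid_target(student, teacher, 0.9)    # near the teacher
```

Because each target is a convex combination of two valid distributions, it remains a valid distribution, and the student is always chasing a target only slightly ahead of itself.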