kanjun
@kanjun.bsky.social
Empowering humans in an age of AI. CEO, Imbue. Interested in agency, interfaces, & experimental social processes
If you're open to new tools, we made Sculptor (imbue.com/sculptor) for this workflow — it runs Claude agents in containers + rsyncs files/git state from their container to your local repo ("Pairing Mode"), so you don't have to make a repo copy or deal with worktrees.

(Also, hello after ~10 years!)
October 8, 2025 at 5:58 PM
Write code in your editor, or use Sculptor agents to generate new code.

Run your custom checks automatically on any new code, so you can detect and fix the issues you care about as you work.
April 11, 2025 at 1:37 AM
Define custom checks with LLM prompts, like "ensure error messages are informative" — or with commands like pytest, pylint, ruff, etc. — to flag issues according to your own specific preferences.

Run code to fix issues until all your checks pass.
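To make the command-based checks concrete, here is a minimal sketch of the idea. It is hypothetical (not Sculptor's actual configuration or API), and the check names and commands are assumptions; a check simply passes when its command exits with status 0.

```python
# Hypothetical sketch of command-based checks (not Sculptor's actual config or API).
# Each check is a shell command; a check passes when it exits with status 0.
# Prompt-based checks like "ensure error messages are informative" would instead
# call an LLM on the diff; that part is omitted here.
import subprocess

CHECKS = {
    "tests": ["pytest", "-q"],
    "lint": ["ruff", "check", "."],
}

def run_checks() -> dict[str, bool]:
    """Run every configured check and report which ones passed."""
    results = {}
    for name, command in CHECKS.items():
        completed = subprocess.run(command)
        results[name] = completed.returncode == 0
    return results

if __name__ == "__main__":
    for name, passed in run_checks().items():
        print(f"{name}: {'ok' if passed else 'failed'}")
```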
April 11, 2025 at 1:37 AM
Kick off as many fixes as you want, all in parallel. Sculptor runs your code in sandboxes, so you can test safely.

When you apply a fix, the code is synced to your local editor.

Sculptor works with all editors, whether @neovim.io / @emacs or Cursor / Windsurf.
April 11, 2025 at 1:37 AM
When you connect Sculptor to your codebase, it checks for issues like missing tests, hardcoded variables, race conditions, etc.

Launch agents to fix the issues, and see the diff before applying.
April 11, 2025 at 1:37 AM
We made a coding agent environment that helps you catch issues, write tests, and improve your code, all while working in your favorite editor :)

Hello world, Sculptor! imbue.com/product/scul...

(also interesting how I feel inclined to post a totally different non-marketing message on bsky vs X)
April 11, 2025 at 1:37 AM
"Undo" is a much better user experience than "Are you sure you want to <X>?" — noticed when comparing friction of Superhuman vs. Todoist.
February 18, 2025 at 3:38 AM
I'm excited to share our $200M Series B at a $1B+ valuation to develop AI systems that reason!

We believe reasoning is the main blocker to effective AI agents. We train large models tailor-made for reasoning on our ~10K GPU cluster, and on top of them we prototype agents that we use every day.
February 18, 2025 at 3:44 AM
We need AI policy that protects people over profit—yet after analyzing the ~1,450 submissions to @NTIAgov's AI RFC, we found many of today’s AI policy proposals don't match the harms they're trying to prevent.

Here's what key players are saying, and a framework for AI policy. 🧵
February 18, 2025 at 3:44 AM
John Atanasoff invented the first fully electronic computer, got drafted in WWII to supervise testing of mines, and never returned to computing.

It’s surprisingly easy to get distracted. I wonder what might’ve happened had he doggedly decided to focus back on computers?
February 18, 2025 at 3:44 AM
Had a very fun time discussing our collective AI future with @reidhoffman :) thanks @Figma for having me at #Config2023!

Recorded here:
https://www.youtube.com/watch?v=fS5Sqw_Ba8U&list=PLXDU_eVOJTx61IdqXh3jrvopJN8HGkS5F
February 18, 2025 at 3:49 AM
It’s remarkable how “not useful” Shannon’s discovery mapping relays to Boolean logic must’ve seemed at the time. It must’ve felt like a toy—“how nice, relays can do math”—because it wasn’t clear yet why being able to do binary arithmetic should matter.
February 18, 2025 at 3:44 AM
First time at PyCon weekend, and it was fun to share our open source tools + a demo of what’s coming! Thanks @pycon for choosing us as one of the 7 startups featured 🥳😎 https://pycon.blogspot.com/2023/04/welcoming-7-companies-to-startup-row-at.html
February 18, 2025 at 3:54 AM
ChatGPT can decidedly already be used to write school essays. Just had it compare Kegan's and Maslow's frameworks, and it emitted an undergrad-level essay. I suspect we'll need to increase our expectations of student essay quality, or find other ways to teach critical thinking.
February 18, 2025 at 3:59 AM
In Avalon, embodied agents solve the types of problems our ancestors solved.

We include 20 procedurally generated tasks, but it's easy to add your own. Everything, including the game engine, is open source.

The action space maps to a VR headset, so any VR task can be added.
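For a rough sense of the shape of such an environment, here is a generic Gymnasium-style interaction loop. The environment below is a stand-in, not Avalon's actual Python API; it only illustrates the reset/step pattern an RL task plugs into.

```python
# Generic Gymnasium-style loop with a stand-in environment; this is not Avalon's
# actual API, just the usual reset/step shape an RL task plugs into.
import gymnasium as gym

env = gym.make("CartPole-v1")          # placeholder task; an Avalon world would slot in here
observation, info = env.reset(seed=0)  # each reset can draw a new procedurally generated world

for _ in range(1_000):
    action = env.action_space.sample()  # random policy, the simplest baseline
    observation, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        observation, info = env.reset()

env.close()
```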
February 18, 2025 at 4:15 AM
Avalon is a Featured Paper at NeurIPS!

We built Avalon so that academic researchers could make progress on general intelligence even without access to huge compute clusters.

Stop by our poster tomorrow 11am to learn more, or read on 👇
February 18, 2025 at 4:04 AM
We're lucky to be supported by backers who believe in long-term robustness & safety rather than short-term commercialization.

They've contributed over $20 million, and committed more than $100 million of future funding in a combination of options & technical milestones.
February 18, 2025 at 4:40 AM
Our ultimate goal: to engineer AI agents that can learn & understand like humans, so they can be safely deployed in the real world.

To do this, we'll extend Avalon over the next year to an encyclopedia of tasks that are easy for humans, but hard for machines.
February 18, 2025 at 4:35 AM
Avalon is the world's fastest 3D simulator for RL agents. All baselines train on 1 GPU in ~1 day.

We want academic researchers to be able to study aspects of intelligence missing from today’s models, even w/o access to large-scale compute.

Get started:...
February 18, 2025 at 4:25 AM
We also wondered: why does nearly every new funding institution set out to "do things differently", and then, when observed 10 years later, seem to have snapped back to the same programs & decision processes as the NIH and NSF? What drives this pattern, and how do we break it?
February 18, 2025 at 4:30 AM
An early question was: why do funders say they want high-risk, high-reward research, yet end up funding low-risk, run-of-the-mill work? Does high-reward research even need to be high-risk? Or could we construct processes that change the risk profile and enable low-risk, high-reward work?
February 18, 2025 at 4:25 AM
Contrary to popular opinion, generating large swaths of functions can be rough—these models are trained to imitate average (often bad!) programmers, and we've seen candidates miss generated bugs during interviews. Can you spot the bug in this Copilot-generated code?
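The screenshot isn't reproduced here; as a purely illustrative stand-in, the snippet below shows the kind of plausible-looking off-by-one that generated helpers can hide.

```python
# Illustrative only; not the code from the post's screenshot.
def moving_average(values, window):
    """Return the moving average of `values` over a fixed window size."""
    averages = []
    for i in range(len(values) - window):  # bug: should be len(values) - window + 1,
        averages.append(sum(values[i:i + window]) / window)  # so the last full window is dropped
    return averages
```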
February 18, 2025 at 4:35 AM
It always shocks me to remember that we didn’t really have street lights until the 1800s - only 200 years ago! And until then, and for long after, being out at night in cities was quite unsafe. To someone living in 1850, what electricity would enable was unimaginable.
February 18, 2025 at 4:25 AM
An observation on field founding via Carnot’s original paper: it’s interesting that despite being totally wrong about the underlying mechanism of heat transfer (caloric theory vs stat mech), he derived what became the 2nd law of thermodynamics & PV=nRT via thought experiments.
February 18, 2025 at 4:30 AM
Holy cow, Fourier and Lazare Carnot (thermodynamics Carnot’s father) were two of Napoleon’s generals! We think today’s distractions are bad…I can’t imagine trying to do mathematics while leading an army into battle.
February 18, 2025 at 4:30 AM