Tomasz Stefaniak
tomasz-fm.bsky.social
Lately: AI things.

Also:
http://rankanything.online - rank things
http://youtube.be/@EggPlusRadish - travel channel
http://stefaniak.cc - more links
If ChatGPT Atlas, or another AI browser, takes off, we'll have websites that are built to be AI-friendly, or even AI-first.

Imagine YouTube, Canva, Revolut, or Amazon but designed with both the human's and the AI's ease of use in mind.

What will the web look like?
October 26, 2025 at 2:54 PM
Any iOS / Xcode experts here who can help me figure out why I'm not seeing the "StoreKit Configuration" option when editing the app scheme?

Screenshot 1: what I see
Screenshots 2 & 3: what the RevenueCat and Apple docs claim I should be seeing

🙏
October 24, 2025 at 12:08 AM
And that's it - that's how Continue Autocomplete is made!

Keep reading for some GitHub links to our codebase where you can dig into this in more detail 🧑‍💻
November 30, 2024 at 5:22 PM
Sometimes, the model might take too long to respond.

Based on our data, if the user doesn’t receive a completion within the first 300ms they’re unlikely to accept it.

That’s why, if the stream is taking too long, we display whatever we have generated so far and fetch the rest in the background.
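The idea can be sketched like this (a minimal illustration, not Continue's actual code): race the token stream against a deadline, and once the deadline passes, show what we have and keep consuming the rest of the stream in the background.

```typescript
// Stand-in for a real LLM token stream.
async function* fakeModelStream(): AsyncGenerator<string> {
  for (const token of ["function ", "add(a, b) ", "{ return a + b; }"]) {
    yield token;
  }
}

async function completeWithDeadline(
  stream: AsyncGenerator<string>,
  deadlineMs: number,
  onBackgroundToken: (t: string) => void,
): Promise<string> {
  const start = Date.now();
  let shown = "";
  for await (const token of stream) {
    shown += token;
    if (Date.now() - start > deadlineMs) {
      // Deadline hit: display what we have so far and finish
      // consuming the stream in the background.
      void (async () => {
        for await (const rest of stream) onBackgroundToken(rest);
      })();
      break;
    }
  }
  return shown;
}
```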
November 30, 2024 at 5:19 PM
We have a filter for each of the major problems.

Some of them fix the issue; others have the authority to end the stream and return an empty response instead.
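A filter chain like that might look roughly like this (hypothetical names, not Continue's API): each filter either transforms the completion or vetoes it, in which case we return an empty completion.

```typescript
type Filter = (completion: string) => string | null; // null = kill the completion

// Example filters (illustrative):
const stripTrailingWhitespace: Filter = (c) => c.replace(/[ \t]+$/gm, "");
const rejectEmpty: Filter = (c) => (c.trim().length === 0 ? null : c);
const rejectRepeatedLine: Filter = (c) => {
  const lines = c.split("\n");
  return lines.some((l, i) => l.length > 0 && l === lines[i - 1]) ? null : c;
};

function applyFilters(completion: string, filters: Filter[]): string {
  let current = completion;
  for (const f of filters) {
    const result = f(current);
    if (result === null) return ""; // a filter vetoed the completion
    current = result;
  }
  return current;
}
```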
November 30, 2024 at 5:19 PM
The final prompts might look like this. They contain a header with snippets, and a prefix and suffix separated by a FIM marker:
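As a rough illustration (StarCoder-style FIM tokens here; the exact tokens differ per model, and the snippet/file contents are made up):

```typescript
// Snippet header: context gathered from elsewhere in the project.
const snippets = "// Path: utils/math.ts\n// function multiply(a, b) { ... }\n";
// Prefix/suffix: everything before and after the cursor.
const prefix = "function add(a: number, b: number) {\n  return ";
const suffix = "\n}\n";

// The model is asked to fill in the middle.
const prompt = `${snippets}<fim_prefix>${prefix}<fim_suffix>${suffix}<fim_middle>`;
```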
November 30, 2024 at 5:17 PM
Continue allows you to plug in a variety of LLMs and we need to generate the right template for each one of them.

We have some custom, model-specific templates for the most popular models, and a generic fallback template for the remaining models.
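Template selection could be sketched like this (illustrative only; the template strings follow the published StarCoder and Code Llama infill formats, and the lookup logic is a guess, not Continue's real code):

```typescript
type FimTemplate = (prefix: string, suffix: string) => string;

// Model-specific templates for popular models.
const templates: Record<string, FimTemplate> = {
  starcoder: (p, s) => `<fim_prefix>${p}<fim_suffix>${s}<fim_middle>`,
  codellama: (p, s) => `<PRE> ${p} <SUF>${s} <MID>`,
};

// Generic fallback for everything else.
const genericTemplate: FimTemplate = (p, s) =>
  `<fim_prefix>${p}<fim_suffix>${s}<fim_middle>`;

function templateFor(model: string): FimTemplate {
  const key = Object.keys(templates).find((k) =>
    model.toLowerCase().includes(k),
  );
  return key ? templates[key] : genericTemplate;
}
```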
November 30, 2024 at 5:17 PM
At the moment, our algorithm is simple: we use any git and clipboard snippets we are able to collect, and if there are still tokens left, we fill the prompt with import definitions from the AST.
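The fill-until-budget idea, sketched (token counting is stubbed with a crude length heuristic; the real thing would use the model's tokenizer):

```typescript
// Very rough estimate: ~4 characters per token.
const approxTokens = (s: string) => Math.ceil(s.length / 4);

// Take snippets in priority order (git + clipboard first, then AST
// import definitions) and stop once the token budget is spent.
function fillContext(snippets: string[], budget: number): string[] {
  const chosen: string[] = [];
  let used = 0;
  for (const snippet of snippets) {
    const cost = approxTokens(snippet);
    if (used + cost > budget) break;
    chosen.push(snippet);
    used += cost;
  }
  return chosen;
}
```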
November 30, 2024 at 5:16 PM
Next, we collect all the uncommitted git changes - they provide important context on user intent.

For example, if a user just created a ‘UserSettingsForm’ React component and now they’re on ‘UserSettingsPage.tsx’, the model can guess the user most likely wants to place the form there.
November 30, 2024 at 5:14 PM
For example, if the function is ‘createUser(user: Partial<User>)’, we need to know the type of the User object, or the model is going to hallucinate a plausible-looking type on its own.
November 30, 2024 at 5:11 PM
🔶 Place the context inside a template that the LLM can understand

🔶 Send the template to the LLM

🔶 Process the LLM’s response (pass it through filters, clean it up, etc.)

🔶 Display the result (most often) or kill the completion (sometimes, when the completion is subpar)
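The steps above as a skeletal pipeline (names and the FIM tokens are illustrative, not Continue's actual internals):

```typescript
async function autocomplete(
  prefix: string,
  suffix: string,
  snippets: string[],
  callLlm: (prompt: string) => Promise<string>,
): Promise<string | null> {
  // 1. Place the context inside a template the LLM can understand.
  const prompt =
    snippets.join("\n") +
    `\n<fim_prefix>${prefix}<fim_suffix>${suffix}<fim_middle>`;
  // 2. Send the template to the LLM.
  const raw = await callLlm(prompt);
  // 3. Process the response (filters, cleanup — just a trim here).
  const cleaned = raw.trim();
  // 4. Display the result, or kill subpar completions.
  return cleaned.length > 0 ? cleaned : null;
}
```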
November 30, 2024 at 5:10 PM
🔷 Prefix - everything that comes before the FIM marker

🔷 Suffix - everything that comes after the FIM marker

🔷 Template - prefix, suffix, and other tokens put together, to be consumed by the LLM
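In code, getting the prefix and suffix is just splitting the current file at the cursor (a minimal sketch of the terms above):

```typescript
// Everything before the cursor is the prefix; everything after, the suffix.
function splitAtCursor(source: string, cursorOffset: number) {
  return {
    prefix: source.slice(0, cursorOffset),
    suffix: source.slice(cursorOffset),
  };
}
```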
November 30, 2024 at 5:08 PM
- Make it smart - autocomplete should correctly anticipate your intent
- Make it fast - getting a completion should be almost instant
- Fail gracefully - sometimes the LLM fails to provide a good completion and we should detect that situation and hide the completion from the user
November 30, 2024 at 5:07 PM
Ever wondered how AI autocomplete works? In this thread I’ll walk you through how we work with LLMs at @continuedev.bsky.social to decipher user intent and provide them with useful completions.

Continue is open source, so I’ll post links to relevant code on GitHub at the end of this thread.
November 30, 2024 at 5:06 PM
I'm in the "Polish smile" rabbit hole
November 30, 2024 at 4:41 AM
In Poland a smile is considered an unnatural facial expression and I think that's beautiful.
November 30, 2024 at 4:20 AM