Tom Hipwell
@tomhipwell.co
VP Engineering at nory.ai. Past roles: Deel/Hofy, Bulb, JPMorgan. Learning. Shipping.
Whoah that's amazing - how did you do it? Definitely worthy of a blog post.
November 9, 2025 at 9:18 PM
Typical breakage is about 6% of your fleet annually. One of the better buys you can make as CTO is a laptop stand for folks: it lifts the laptop off the desk and cheaply removes most of one class of damage (spillage)
May 12, 2025 at 8:28 PM
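As a rough illustration of the trade-off in the post above: the ~6% annual breakage rate is from the post, but the fleet size, prices, and spill share below are made-up assumptions, so treat this as a back-of-envelope sketch rather than real numbers.

```python
# Back-of-envelope maths for the laptop-stand buy.
# Only the ~6% annual breakage rate comes from the post above; every other
# number here (fleet size, prices, spill share) is an assumption.

FLEET_SIZE = 100        # assumed: laptops in the fleet
BREAKAGE_RATE = 0.06    # from the post: ~6% of the fleet breaks each year
LAPTOP_COST = 1_500     # assumed: replacement cost per laptop
STAND_COST = 30         # assumed: cost of one laptop stand
SPILL_SHARE = 0.4       # assumed: fraction of breakages caused by spills

expected_breakages = FLEET_SIZE * BREAKAGE_RATE          # 6 laptops/year
annual_breakage_cost = expected_breakages * LAPTOP_COST  # 9,000/year
stands_outlay = FLEET_SIZE * STAND_COST                  # 3,000 one-off
avoided_per_year = annual_breakage_cost * SPILL_SHARE    # 3,600/year avoided

print(f"Expected breakages/year: {expected_breakages:.0f}")
print(f"Annual breakage cost:    {annual_breakage_cost:,.0f}")
print(f"One-off cost of stands:  {stands_outlay:,.0f}")
print(f"Spill cost avoided/year: {avoided_per_year:,.0f}")
```

Under these assumed numbers the stands pay for themselves inside the first year, which is the point of the post.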
Great post, loved this one. Have done that subprocessor trick before but never internalised it enough to repeat it like you have. Lesson received! There's probably a nice hack for someone there to go and document a bunch of AI stacks easily, especially when it's also seemingly so heavily vendored.
May 11, 2025 at 7:53 PM
I think in the first post they said it wasn't the first time they'd used the thumbs up/down as a reward signal, but what had gone wrong was that they'd treated all user tenures the same, the implication being that aged users gave a much better signal. Matches intuition, but a good insight I thought.
May 2, 2025 at 9:19 PM
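The post above doesn't spell out an implementation, so purely as a hypothetical sketch of the idea (thumbs up/down as a reward signal, weighted by user tenure instead of treating every tenure the same), here's what that weighting might look like; the linear ramp and 90-day saturation are invented for illustration.

```python
# Hypothetical sketch: weight a thumbs up/down reward signal by user tenure,
# rather than treating all user tenures the same (the failure mode described
# in the post above). The linear ramp and 90-day saturation are made up.

from dataclasses import dataclass


@dataclass
class Feedback:
    thumbs_up: bool    # the raw thumbs up/down click
    tenure_days: int   # how long this user has been active


def tenure_weight(tenure_days: int, saturation_days: int = 90) -> float:
    """Trust newer users less; the weight ramps up and saturates at 1.0."""
    return min(max(tenure_days, 0) / saturation_days, 1.0)


def reward(fb: Feedback) -> float:
    """Signed reward in [-1, 1], scaled by how much we trust this user's signal."""
    raw = 1.0 if fb.thumbs_up else -1.0
    return raw * tenure_weight(fb.tenure_days)


# A day-one thumbs up counts for very little; an aged user's vote counts in full.
print(reward(Feedback(thumbs_up=True, tenure_days=1)))     # ~0.01
print(reward(Feedback(thumbs_up=False, tenure_days=400)))  # -1.0
```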
What are you using for the remote control of your agents from your phone?
April 27, 2025 at 8:26 PM
I expanded this whole thread a little into a blog post this evening -> tomhipwell.co/blog/cursor_...
Cursor rules, prompt injections, voice to text and Diane | Tom Hipwell
Learning in the open | Tom Hipwell
tomhipwell.co
March 23, 2025 at 8:50 PM
Agree, I think a core bit of AI engineering is knowing whether you need the feature or not, and choosing appropriately.
March 22, 2025 at 8:18 AM
There's a great Matt Webb post from this week which explains really vividly the utility of the pattern: it's a feature, not a bug -> interconnected.org/home/2025/03...
Diane, I wrote a lecture by talking about it
Posted on Thursday 20 Mar 2025. 831 words, 8 links. By Matt Webb.
interconnected.org
March 21, 2025 at 8:19 PM
You don't control the context in Copilot. A hypothesis would be that most cynical devs are not using chat mode. They're opted into Copilot at the org level and they're using the default model. The value they're getting is short completions which are often off the ball, hence "quicker to write myself".
March 2, 2025 at 7:39 AM
I think the interaction mode makes a difference. When folks complain about hallucinations it's not due to code generated in chat mode, it's Copilot completions. A short completion with a hallucination is one you end up dismissing all the time.
March 2, 2025 at 7:27 AM
What are the T&Cs though, are they training on your code?
February 25, 2025 at 8:25 PM
Something like that anyway. It's still early days so you are not behind the curve.
February 2, 2025 at 8:09 AM
Then once you've finished and you've started to work out how to prompt for better results, move to a domain/task where you are expert. You'll instantly find the flaws in what the model returns, but you'll have the prompting skills to start working around those limitations and fit the model to your workflow.
February 2, 2025 at 8:02 AM
I think I'd start with something fun where you are non-expert; the breadth of application is like the breadth of content on YouTube - it could be anything. If you choose learning a new skill or a project, that'll also gently teach you how to prompt for the best results as you get deeper.
February 2, 2025 at 7:56 AM
That tip at the end to use the TTS to play back and listen for naturalness is 👨‍🍳
February 2, 2025 at 7:49 AM