Matt Wallace
@m9e.bsky.social
CTO building #AI
as an AI founder who is ridiculously all in, when I’m like flabbergasted by the temerity of your product I just can’t even…
June 5, 2025 at 6:18 PM
Claude did not understand the mission and tried to write his system prompt to Notion. haha!
May 6, 2025 at 4:50 PM
Wild and wonderful watching a keynote demo 100ft wide and being able to literally picture the code in your head. 😁
April 1, 2025 at 5:04 PM
Jensen: "I'd never buy a hopper!"

Azure: "We don't have any Ampere GPUs to turn up even."

Me: 😠
March 19, 2025 at 6:45 PM
640KB ought to be enough for anybody.
March 15, 2025 at 6:55 AM
Side note: when I was chiming in on GitHub and, I think, actually triggered gg to start merging this back in, I remember I was running 32B_Q8 with a 7B Q4_K_L draft, but I think I still had mine set to --draft 5. I will say 7B >>>> 3B so far; I may have to play around with some even smaller drafts.
March 11, 2025 at 1:10 PM
32B Coder Q8 with and without a 7B Q4_K_L draft. PSA: speculative decoding is in llama.cpp and works. (Depending on your hardware, experiment with different model sizes; YMMV, wildly.)
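A minimal sketch of that kind of invocation, assuming a llama.cpp build with the server binary. The GGUF filenames below are placeholders for whatever 32B/7B coder quants you're using, and flag names drift between llama.cpp releases, so check --help on your build:

./llama-server \
  -m  your-coder-32b-q8_0.gguf \
  -md your-coder-7b-q4_k_l.gguf \
  --draft 5 \
  -ngl 99 -ngld 99

# -m   = the target model (the output you actually keep)
# -md  = the small draft model that proposes tokens for the target to verify
# --draft = tokens drafted per step (5 was the setting mentioned above; tune it)
# -ngl / -ngld = GPU layers to offload for the target and draft models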
March 11, 2025 at 12:50 PM
OK, told Claude 3.7 to extract some settings mgmt components from an app layer into a generic FastAPI router + React component. I expected that to work. Him writing this beautiful README with emojis I did *NOT* expect.
February 25, 2025 at 12:08 AM
🤣❤️
January 31, 2025 at 12:18 AM
heheh.
January 24, 2025 at 2:01 PM
Even stronger implication:
January 24, 2025 at 1:56 PM
It's Friday, It's Friday, 🍸 & GPU time.
January 4, 2025 at 4:43 AM
then this was a ~easy question. Which makes it a bad problem. Even if you could norm your test so it didn't look bad, it was still definitely bad, because it was too domain/culture-specific.

Which is interesting, because leetcode is a shibboleth. Don't trust me.
January 3, 2025 at 1:05 PM
** I could have said "LLMs" or even "AI," but I don't want to get too bogged down here in the implementation. The point is the synthesis and application of knowledge across use cases. Side note: I told o1-pro to pro/con debate my statement. ;)
December 31, 2024 at 1:49 PM
GPTs are to knowledge what moveable type was to books.
December 31, 2024 at 1:47 PM
yes.

cdn.openai.com/spec/model-s...

but folks are training hard to stop it.
December 30, 2024 at 2:31 PM
omg I am dying. Claude takes "output as bash" as "pretend to be my bash shell" instead of "output as bash script" 😂 I'm going to take it home and feed it and keep it forever!
December 30, 2024 at 1:24 PM
After downloading 60+ artifacts click-to-download, used github.com/m9e/aishell to move them all to the right place, after "head -5 *.*" to put the targets into the LLM context. AIshell is a toy, but sometimes hilariously useful.
December 8, 2024 at 12:21 AM
At a conference, and why I love my Vision Pro, in a picture.
December 2, 2024 at 7:36 PM
Well, @kamiwaza-ai.bsky.social is getting a pretty robust eval suite next release. Time to crank it on the big boy.
December 2, 2024 at 1:12 AM
Surprised. Picked this up (and Phlebas, which I finished) after Dario Amodei referenced it in that essay, and the premise seemed interesting.
December 1, 2024 at 4:36 PM
Marco-o1 is no o1-preview, gonna say that.
November 22, 2024 at 11:33 PM
The baby got chocolate on my hoodie. This is now how I'm staying warm in the back of the car on this trip.

Code or perish.
November 20, 2024 at 8:55 AM