As screens are larger, I find it much more pleasant on iPadOS and macOS.
I had the opportunity to grab a WWDC lab, and here’s a bunch of info you might find useful.
🧵
But in an AI-first world, the apps with the most potential are, IMO, those like @raycastapp, which offer deep integration with the system that goes way beyond browsing. And they’re far from being sherlocked.
The UI is not radically different, but it's pleasant.
A simple inference API would already have been good, but the Foundation Models framework (FMf) goes way beyond that.
That said, Siri is still, by FAR, the biggest irritant while using CarPlay.
#WWDC25
My preferred workflow now starts with a planning phase before letting the AI touch a single line of code.
🧵
Two I keep running into:
– Reading
– Memorisation
Let’s talk about them.
We’ve compared quantized Llama 3.2 1B QLoRA and the full precision model.
Results:
⚡Quantized model: 9.56s
⌛Full precision model: 19.14s
It’s 2x faster with quantization! 📈
Try it yourself with our example app ⭐
github.com/software-man...
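For context, a latency comparison like this can be reproduced with a simple timing harness. The sketch below is illustrative: the two `time.sleep` callables are hypothetical stand-ins for real inference calls, so swap in your own quantized and full-precision model invocations.

```python
import time

def benchmark(label, generate, n_runs=3):
    # Time several runs of a text-generation callable and return the mean latency.
    times = []
    for _ in range(n_runs):
        start = time.perf_counter()
        generate()
        times.append(time.perf_counter() - start)
    mean = sum(times) / len(times)
    print(f"{label}: {mean:.2f}s")
    return mean

# Hypothetical stand-ins for real model calls; replace with your inference code.
quantized = lambda: time.sleep(0.05)
full_precision = lambda: time.sleep(0.10)

q = benchmark("Quantized", quantized)
f = benchmark("Full precision", full_precision)
print(f"Speedup: {f / q:.1f}x")
```

Averaging over several runs (and ideally discarding a warm-up run) matters on-device, since first-token latency and thermal state can skew a single measurement.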
Here's a comparison among the most starred ones as of Feb 2025: llama.cpp, MediaPipe, MLC-LLM, MLX, MNN, and PyTorch ExecuTorch.