Simone Civetta
viteinfinite.bsky.social
Installed Beta 2 on my iPhone. The Liquid Glass effect is nice, but too prominent on such limited screen real estate. It feels somehow “baroque”.
With larger screens, I find it much more pleasant on iPadOS and macOS.
June 24, 2025 at 9:50 AM
Foundation Models framework is beautiful.
I had the opportunity to grab a WWDC lab, and here’s a bunch of info you might find useful.
🧵
June 17, 2025 at 7:23 AM
Thinking about Dia (and @perplexity_ai’s Comet - which I haven’t tried yet).
But in an AI-first world, the apps with the most potential are IMO those like @raycastapp, offering deep integration with the system that goes way beyond browsing. And they’re far from being sherlocked.
June 14, 2025 at 5:40 AM
macOS Beta is usable as a daily driver on my MBPro M1. And by usable, I mean "usable". It's not perfect, but it doesn't crash. Main issues so far: far from buttery smooth (esp. Safari, which is sluggish to say the least) + quick battery drain.
The UI is not radically different, but it's pleasant.
June 12, 2025 at 8:04 PM
As much as I loved “Letter to Arc members” by @browsercompany, I feel that, for my own personal use, Dia is a step in the wrong direction. I find that features like the command bar (with actions) or swiping to change profiles were true productivity enhancers.
June 11, 2025 at 4:49 PM
At this point the most disappointing Apple product is the Apple Watch. Sure, they’ve sold millions of them, but it now sits in a purgatory: too power-hungry for a long-lasting battery life, yet too constrained to run an SLM or proper background tasks.
June 11, 2025 at 10:17 AM
I’m glad to see the Foundation Models framework so feature-rich, and its API so pleasantly “swifty”.
A simple inference API would have already been good, but FMf goes way beyond that.
June 11, 2025 at 7:19 AM
The ChatGPT integration in Xcode will certainly be useful at some point. But at the moment it’s still ages behind Cursor (via the xcode-build-server integration, cf. @dimillian.app's tutorial).
June 10, 2025 at 2:11 PM
Swift Assist DOA?! #WWDC25
June 9, 2025 at 6:29 PM
Not sure Background Task APIs are the right answer but it's certainly a step in the right direction. #WWDC25
June 9, 2025 at 6:26 PM
Ok, I lied. I was also hoping for these new iPad features! #WWDC25
June 9, 2025 at 6:23 PM
Shortcuts looks neat, even as a Raycast user. #WWDC25
June 9, 2025 at 6:05 PM
To me the only meaningful update to watchOS would be a truly minimal version that would allow the battery to last at least 4 days.
June 9, 2025 at 5:45 PM
Screen Visual Intelligence hopefully available (in Europe) before Winter 2029. #WWDC25
June 9, 2025 at 5:41 PM
Ok, translations. But just give me multilingual Siri for putain sake. #WWDC25
June 9, 2025 at 5:30 PM
Not entirely sure that widgets in CarPlay are the safest idea. Looking forward to seeing how it works in real life.
That said, Siri is still, by FAR, the biggest irritant while using CarPlay.
#WWDC25
June 9, 2025 at 5:22 PM
That Liquid Glass shader is giving me shivers. #WWDC25
June 9, 2025 at 5:12 PM
Spoke too soon. The *only* API I was actually waiting for, the "Foundation Models framework", is official!
June 9, 2025 at 5:08 PM
#WWDC is starting now, and the fact that it’s kicking off with an #F1TheMovie promo brings back painful memories of that Apple Music-focused WWDC.
June 9, 2025 at 5:06 PM
Lately, I’ve been using Roocode more and more in my AI coding workflow—so much so that I miss it when working in Cursor.
My preferred workflow now starts with a planning phase before letting the AI touch a single line of code.
🧵
May 23, 2025 at 7:16 AM
Coding with AI brings lots of challenges.
Two I keep running into:
– Reading
– Memorisation
Let’s talk about them.
May 16, 2025 at 6:51 AM
🤯 @piaskowyk.bsky.social showing JS code running faster than C++!
April 2, 2025 at 11:19 AM
Reposted by Simone Civetta
It’s time for the Llama generation speed benchmark! 🚀

We’ve compared quantized Llama 3.2 1B QLoRA and the full precision model.
Results:
⚡Quantized model: 9.56s
⌛Full precision model: 19.14s

It’s 2x faster with quantization! 📈

Try it yourself with our example app ⭐
github.com/software-man...
February 13, 2025 at 3:09 PM
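The "2x faster" claim in the benchmark post above can be sanity-checked directly from the reported timings (9.56 s quantized vs 19.14 s full precision):

```python
# Reported generation times from the Llama 3.2 1B benchmark post.
full_precision_s = 19.14  # full-precision model
quantized_s = 9.56        # QLoRA-quantized model

speedup = full_precision_s / quantized_s
print(f"Speedup: {speedup:.2f}x")  # → Speedup: 2.00x
```

So the quantized model is indeed almost exactly twice as fast on these numbers.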
Choosing the right engine is key in local LLM/LMM inference; it can significantly impact speed, quality, docs, compatibility and portability.

Here's a comparison among the most starred ones as of Feb 2025: llama.cpp, MediaPipe, MLC-LLM, MLX, MNN, and PyTorch ExecuTorch.
February 11, 2025 at 7:54 AM