Simone Civetta
viteinfinite.bsky.social
Lately, I’ve been using Roocode more and more in my AI coding workflow—so much so that I miss it when working in Cursor.
My preferred workflow now starts with a planning phase before letting the AI touch a single line of code.
🧵
May 23, 2025 at 7:16 AM
🤯 @piaskowyk.bsky.social showing JS code running faster than C++!
April 2, 2025 at 11:19 AM
Choosing the right engine is key in local LLM/LMM inference; it can significantly impact speed, quality, docs, compatibility and portability.

Here's a comparison among the most starred ones as of Feb 2025: llama.cpp, MediaPipe, MLC-LLM, MLX, MNN, and PyTorch ExecuTorch.
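One way to make the speed comparison concrete is to measure decode throughput in tokens per second. Below is a minimal, engine-agnostic sketch in Python, assuming each engine can be wrapped in a `generate` callable that returns the number of tokens produced; the `fake_engine` stub is purely illustrative and stands in for a real binding such as llama.cpp's or MLX's.

```python
import time

def tokens_per_second(generate, prompt, n_tokens=128):
    """Time a generation call and return decode throughput.

    `generate` is a stand-in for any engine's generation API
    (llama.cpp, MLC-LLM, MLX, ...); it must return the number
    of tokens it actually produced.
    """
    start = time.perf_counter()
    produced = generate(prompt, n_tokens)
    elapsed = time.perf_counter() - start
    return produced / elapsed

# Hypothetical stub that "generates" at a fixed simulated rate,
# so the harness can run without any real model or weights.
def fake_engine(prompt, n_tokens):
    time.sleep(n_tokens * 0.001)  # pretend ~1 ms per token
    return n_tokens

rate = tokens_per_second(fake_engine, "Hello", n_tokens=100)
print(f"{rate:.0f} tok/sec")
```

Running the same prompt and token budget through each engine's wrapper gives a rough, apples-to-apples throughput number; quality, memory use, and portability still have to be judged separately.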
February 11, 2025 at 7:54 AM
🤯 This is DeepSeek R1 Distill—a Llama-based model distilled from DeepSeek R1, running LOCALLY in your browser, at a decent ~12 tok/sec on my M1 Pro.

A project by MLC LLM (Guess where they are from?)

huggingface.co/spaces/mlc-a...
February 1, 2025 at 10:14 AM
And guess who joined our Content Team this year? @delphinebugner.bsky.social — Mobile Lead at Mistral and one of the most passionate and enthusiastic people you’ll ever meet in this community!
January 20, 2025 at 10:13 AM
This is your friendly reminder that just 2 days remain until the CfP closes!
👉 https://reactnativeconnection.io/cfp
January 17, 2025 at 10:16 AM
🎉 We’re thrilled to finally announce that Manuela Sakura Rommel is joining our lineup!
Manuela is a Women Techmakers Ambassador and co-organizer of the Berlin Flutter meetup. She was notably featured on the Observable Flutter podcast by @labenz.dev.
👉 https://flutterconnection.io
January 16, 2025 at 10:06 PM
I’m testing stuff!
January 8, 2025 at 9:55 PM