#OnDevice
A new decomposed pipeline splits intent extraction into summarization and inference, letting small on‑device models achieve higher accuracy while keeping data private and reducing latency. Read more: https://getnews.me/decomposed-method-boosts-intent-extraction-on-small-ai-models/ #intent #ondevice
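A minimal sketch of that two-stage split, assuming only a generic on-device text generator behind a `generate(prompt) -> str` callable; the prompts, label set, and function names are illustrative, not the recipe from the linked article.

```python
from typing import Callable

# Illustrative label set; a real pipeline would use the app's own intents.
INTENT_LABELS = ["book_flight", "cancel_order", "check_balance", "other"]

def extract_intent(utterance: str, generate: Callable[[str], str]) -> str:
    """Decomposed intent extraction: summarize first, then infer the label."""
    # Stage 1: compress the raw utterance into a short task summary so the
    # small model's second prompt sees less noise.
    summary = generate(
        "Summarize the user's request in one short sentence:\n" + utterance
    )
    # Stage 2: infer the intent label from the summary alone.
    label = generate(
        f"Choose the single best intent from {INTENT_LABELS} for this "
        f"request, and answer with the label only:\n{summary}"
    )
    return label.strip()
```

Any local runtime (llama.cpp, MLC, an NPU SDK) can stand behind `generate`; the point is that each stage hands the small model a narrower, easier sub-problem.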
September 18, 2025 at 6:23 AM
BUT...the whole reason I'm mentioning this shit now is that I need to start flexing my ML dev muscles a bit more with this LLM thing, and while OnDevice AI is my main thing, so is using technology for good.

And nothing really good has come from AI so far, so let's fucking change that a bit!
January 26, 2025 at 5:50 PM
🧠 Option 1: Local AI (OnDevice)

The Gemini Nano model runs directly on the device, with no network needed.
That's really promising for privacy, latency, and app autonomy.

But there are a few limitations 👇
June 22, 2025 at 8:39 AM
Not having important documentation is one thing; not being able to see a health professional is another, and a much more serious one.

That ends today.

Learn more at Lifequipt.com

#Web3 #DigitalIdentity #OnDevice #CarryAVault
November 5, 2025 at 11:16 PM
Finally, there's so much augmentation that could improve the accuracy of @LeapMotion so it's not doing all the lifting: on-device cameras, etc.
November 27, 2024 at 12:19 PM
Honestly, 2025 is going to be a wild year, with diffusion models running OnDevice. That has a ton of harms, but it may also help with media and tech literacy: people may understand how easy it is to generate fake shit.

I may drop ship analog cameras!
November 27, 2024 at 6:34 PM
In 2025, small AI is having a big moment.

Models like Phi-3 Mini and Gemini Nano are powering offline tools, apps, and assistants.

📕 New post: Why Smaller AI Models Are Becoming More Relevant 🔗 philaverse.substack.com/p/why-smalle...
#ai #tech #ondevice #gemini #openmodels
Why Smaller AI Models Are Becoming More Relevant
As tech companies scale back from massive cloud-based AI models, smaller and more efficient systems are quietly reshaping how AI is deployed in everyday devices.
philaverse.substack.com
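For anyone who wants to try the "small model, offline" idea directly, here is a minimal local-inference sketch using Hugging Face transformers and the public Phi-3 Mini checkpoint; the device placement, generation settings, and prompt are just one plausible setup, not something from the linked post.

```python
from transformers import pipeline

# Downloads once, then runs fully locally. device_map="auto" needs the
# accelerate package; older transformers versions may also need
# trust_remote_code=True for Phi-3.
chat = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",
    device_map="auto",
)

messages = [{"role": "user",
             "content": "Summarize these notes: milk, dentist 3pm, call Ana."}]
reply = chat(messages, max_new_tokens=96)
print(reply[0]["generated_text"][-1]["content"])  # assistant's message
```

On smaller devices you would typically add 4-bit quantization, but the shape of the code stays the same.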
August 5, 2025 at 2:24 AM
To be fair, the actual analysis happens on-device, and the rest is done via homomorphic encryption. Honest question: why should that be a problem?
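For readers unfamiliar with the term, here is a toy additively homomorphic scheme (Paillier) showing the core trick of computing on ciphertexts; the tiny primes are for demonstration only and offer no security, and this is a generic illustration, not the scheme the system above actually uses.

```python
import math
import random

def keygen(p: int = 101, q: int = 113):
    """Toy Paillier keypair from two small primes (insecure, demo only)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid because we fix g = n + 1
    return (n,), (n, lam, mu)     # (public key, private key)

def encrypt(pk, m: int) -> int:
    (n,) = pk
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c: int) -> int:
    n, lam, mu = sk
    u = pow(c, lam, n * n)
    return ((u - 1) // n) * mu % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 42), encrypt(pk, 58)
# Multiplying ciphertexts adds the plaintexts -- the server never sees 42 or 58.
assert decrypt(sk, (c1 * c2) % (pk[0] ** 2)) == 100
```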
January 5, 2025 at 8:50 PM
🧪 My takeaways on OnDevice mode:

By removing the network barrier, we open up a lot of doors for apps:
✅ Creativity
✅ Instant experiences
✅ Fewer technical dependencies
✅ And presumably greener (zero server calls)
June 22, 2025 at 8:41 AM
🔮 Future you just called saying G-Nee AI on-device is a game-changer!
🛡️ Keep your dreams private and your progress unstoppable with on-device intelligence.
📱 Y-Pod with G-Nee on-device, now available on the App Store

#ypod #ai #ondevice #privacy #ios #macos
February 7, 2025 at 11:04 AM
AI healthcare company OnDevice AI is participating in a CDC-led malaria diagnostics project, working in collaboration with KEMRI. #AIinHealthcare https://fefd.link/Rf9aF
February 18, 2025 at 9:18 PM
Data centres already account for 2.5% of the UK's energy demand, and most countries are similar. AI use is projected to push this even higher, and not just energy but water demands too are becoming constraints.

#ondevice #localfirst #lofi

1/2
August 20, 2025 at 3:55 PM
DualTune, a decoupled fine‑tuning method, boosts tool‑calling accuracy by 46% on the MCP‑Bench using a Qwen‑2.5‑7B model, enabling more reliable on‑device AI agents. Read more: https://getnews.me/dualtune-introduces-decoupled-fine-tuning-for-efficient-on-device-ai-agents/ #dualtune #llm #ondevice
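As a rough illustration of what "decoupled" tool calling looks like at inference time (one model or adapter picks the tool, a second fills in the arguments), here is a sketch; the tool registry, prompts, and callables are assumptions, not DualTune's actual recipe.

```python
import json
from typing import Callable

# Illustrative tool registry; a real agent would load this from MCP metadata.
TOOLS = {
    "get_weather": ["city", "date"],
    "send_email": ["to", "subject", "body"],
}

def call_tool(query: str,
              select_tool: Callable[[str], str],
              fill_args: Callable[[str], str]) -> dict:
    # Stage 1: tool selection, handled by its own fine-tuned adapter.
    tool = select_tool(
        f"Pick exactly one tool from {sorted(TOOLS)} for this request:\n{query}"
    ).strip()
    # Stage 2: argument generation, conditioned only on the chosen tool's schema.
    raw = fill_args(
        f"Return a JSON object with keys {TOOLS[tool]} for tool '{tool}', "
        f"given the request:\n{query}"
    )
    return {"tool": tool, "arguments": json.loads(raw)}
```

The idea behind the split is that each stage becomes a narrower problem a 7B model can specialize on.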
October 2, 2025 at 8:31 PM
afaict this isn't really age "verification" in any sense, just requires entering a date in a birthdate field? even if it gets extended later it establishes one of the least bad possible frameworks for that to happen in: age checking ondevice, only a minimal "age bracket signal" is exposed to apps
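A minimal sketch of what "age check on-device, minimal bracket signal" could look like in code; the bracket boundaries and names are illustrative, not any actual platform API.

```python
from datetime import date
from enum import Enum
from typing import Optional

class AgeBracket(Enum):
    UNDER_13 = "under_13"
    TEEN_13_17 = "13_17"
    ADULT = "18_plus"

def age_bracket(birthdate: date, today: Optional[date] = None) -> AgeBracket:
    """Computed on the device; only the bracket is ever shared with apps."""
    today = today or date.today()
    years = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    if years < 13:
        return AgeBracket.UNDER_13
    if years < 18:
        return AgeBracket.TEEN_13_17
    return AgeBracket.ADULT

# An app requesting the signal gets e.g. AgeBracket.TEEN_13_17,
# never the birthdate itself.
```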
October 15, 2025 at 2:32 AM
#Google #DeepMind unveiled a version of its #GeminiRobotics AI model that allows #robots to work #without an #internet connection. The #OnDevice model runs locally on robotic devices, aimed at general-purpose #dexterity and fast task #adaptation. aibusiness.com/automation/g...
Google DeepMind Unveils AI Robots that Work Offline
The latest iteration of the Gemini Robotics AI model enables robots to complete novel, complex tasks without an internet connection
aibusiness.com
July 10, 2025 at 11:00 AM
CIFLEX enables on‑device LLMs to handle multiple sub‑tasks efficiently by reusing KV caches and isolating instructions. It was accepted at EMNLP 2025 in September 2025. Read more: https://getnews.me/ciflex-enables-efficient-multi-task-dialogue-on-device-with-llms/ #ciflex #ondevice
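The KV-cache-reuse idea, sketched against a stand-in model interface (CIFLEX's actual implementation is in the paper, not reproduced here): prefill the shared dialogue context once, then run each sub-task instruction on top of that cached prefix, keeping instructions isolated from one another.

```python
from typing import Any, List, Protocol

class LocalLM(Protocol):
    """Stand-in interface for an on-device LLM runtime."""
    def prefill(self, text: str) -> Any: ...                  # builds a KV cache
    def generate(self, cache: Any, instruction: str) -> str: ...

def run_subtasks(model: LocalLM, dialogue: str, instructions: List[str]) -> List[str]:
    # Pay the prefill cost for the shared dialogue context only once.
    shared_cache = model.prefill(dialogue)
    outputs = []
    for instruction in instructions:
        # Each sub-task decodes on top of the shared prefix but never sees
        # the other instructions or their outputs (instruction isolation).
        # Assumes generate() does not mutate the shared cache.
        outputs.append(model.generate(shared_cache, instruction))
    return outputs
```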
October 3, 2025 at 6:58 PM
SmartScout - Your OnDevice AI Sidekick
From the "DEV Community" RSS feed
🚀 Hackathon Success: SmartScout - Your OnDevice AI Sidekick 🚀 We’re thrilled to announce the...
dev.to
December 5, 2024 at 4:09 PM
💥 ObjectBox 4.1 for Java & Kotlin
* JWT authentication for Data Sync 🔒
* Advanced query conditions for map properties 🗺️
* "Geo" distance for Vector Search 🌍

github.com/objectbox/ob...

#java #kotlin #database #vectordatabase #ondevice #edge
January 31, 2025 at 3:06 PM
Fetching random comics now works ondevice #playdate
July 2, 2025 at 12:47 AM
How this seems to work is a bit nuts macroscopically, because it looks like they software-defined on-device scheduling. I'm actually not a modern CUDA specialist, but at a high level this looks very modern to me. The only on-device scheduling that NVIDIA GPUs will do by default is waaay simpler than that.
June 24, 2025 at 8:11 PM
ID: CVE-2024-47029
CVSS N/A
In TrustySharedMemoryManager::GetSharedMemory of ondevice/trusty/trusty_shared_memory_manager.cc, there is a possible out of bounds read due to an incorrect bounds check. This could lead to local information disclosure with no...
#security #infosec #cve-alert
nvd.nist.gov
October 25, 2024 at 11:16 AM
6. Full access to Apple's on-device and cloud foundation models, so that you can use them basically the way we do with API calls for the other LLMs, but free and optionally all on-device, etc. I anticipate it will instead be fairly restricted. Glad MLX is around.
June 9, 2025 at 3:01 AM