#SLMs
📢 Our Scientific Director, Prof. Xavier Costa-Pérez, is at IEEE #MSWiM2025 today to deliver a keynote on Agentic #AI for Mobile Connected Systems.

📶 He'll present a RAG framework using SLMs/LLMs to disrupt telecoms. ➕Info: https://go.i2cat.net/MSWIM
 
#i2CATresearch
MSWiM 2025 - Keynotes
Abstract: This keynote presents a bold, transformative vision for the future of mobile connected systems based on Agentic AI. We will delve into our research on leveraging Small and Large Language Models (SLMs and LLMs) to create a new generation of intelligent, autonomous mobile networks. As a foun…
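The post doesn't say how the framework is built; as a rough illustration of the RAG pattern it names, here is a minimal sketch. Everything below is hypothetical: retrieval is a toy cosine-similarity lookup, and the generation backend (whatever SLM/LLM the framework uses) is left out.

```python
# Minimal RAG loop (illustrative only): retrieve domain documents,
# then hand a grounded prompt to a small model.
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    embedding: list[float]  # produced by any sentence embedder

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def retrieve(query_emb: list[float], corpus: list[Doc], k: int = 3) -> list[Doc]:
    # Rank the corpus by similarity to the query and keep the top k.
    return sorted(corpus, key=lambda d: cosine(query_emb, d.embedding), reverse=True)[:k]

def build_prompt(question: str, docs: list[Doc]) -> str:
    # The retrieved passages become the only context the model may use.
    context = "\n".join(f"- {d.text}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```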
October 28, 2025 at 7:31 AM
we just released small-and-mighty distilled PII reduction models, check them out github.com/distil-labs/...

you can run them locally (1B!) and they work as well as much bigger generic models; SLMs FTW!
GitHub - distil-labs/Distil-PII
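The post doesn't show the repo's API. Purely as an illustration, a 1B instruction-tuned redaction model published on Hugging Face could be run locally along these lines; the model ID and prompt format here are hypothetical, so check the repo's README for real usage:

```python
# Hypothetical local PII-redaction run; the actual Distil-PII usage may differ.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="distil-labs/Distil-PII-1B",  # hypothetical model ID
    device_map="auto",                  # run on GPU if available, else CPU
)

text = "Contact Jane Doe at jane.doe@example.com or +1-555-0100."
prompt = f"Redact all personally identifiable information:\n{text}"
out = generator(prompt, max_new_tokens=128)[0]["generated_text"]
print(out)
```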
October 16, 2025 at 3:29 PM
My new side project (🚧 wip): Generating snippets in VSCode with @docker.com Model Runner, #Docker #Agentic #Compose with only local #SLMs
youtu.be/4f98wCxBNoY
SNIP
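Docker Model Runner serves local models behind an OpenAI-compatible API. Assuming host-side TCP access is enabled (Docker documents `docker desktop enable model-runner --tcp 12434`), a snippet-generation call might look like this sketch; the model name is only an example:

```python
# Sketch: ask a local SLM served by Docker Model Runner for a code snippet.
import requests

resp = requests.post(
    "http://localhost:12434/engines/v1/chat/completions",
    json={
        "model": "ai/smollm2",  # example model; use whatever you've pulled
        "messages": [
            {"role": "user", "content": "Write a VSCode snippet for a Go HTTP handler."}
        ],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```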
October 10, 2025 at 7:58 PM
there are actually things you can do with in-house, ethically trained SLMs that avoid the big firms, but at that point you’re hiring a handful of the people who know how to do this without the big hyperscalers.
October 10, 2025 at 12:18 AM
Yes, but it's not really a Copilot feature, it's a Windows AI feature. Much like the Settings agent; they all use local SLMs (Phi Silica and Mu) and are built using tools like the Agent Framework.
October 7, 2025 at 6:12 PM
Transformer models (both LLMs and SLMs) didn't solve any open questions, but they did produce what is just about a first-order approximation of a solution.

Tbf this is just another occurrence of Tesler's theorem.
October 5, 2025 at 6:28 AM
As we approach AI utopia, let us not forget that these SLMs walked so that LLMs could run
September 18, 2025 at 3:36 PM
SLMs are purpose-driven, finely tuned for specific tasks, and often outperform LLMs in low-resource settings where efficiency and adaptability matter most. While LLMs focus on scale, SLMs prioritize impact—especially in communities where access to technology is limited.
February 18, 2025 at 9:10 AM
I’m definitely stage 8, hoping to be officially stage 9 as a certified SLMS soon!
February 15, 2025 at 2:05 AM
BTW, here are the relevant NJ SLMS endorsement requirements:

state.nj.us/education/lice…
December 29, 2024 at 5:29 PM
Want to dive deeper? Download our White Paper and explore how SLMs are reshaping AI: bit.ly/EqualyzAISLM...
March 18, 2025 at 8:57 AM
effectiveness in enhancing the security of SLMs. We further analyze the potential security degradation caused by different SLM techniques including architecture compression, quantization, knowledge distillation, and so on. [5/6 of https://arxiv.org/abs/2502.19883v1]
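To make one of the listed techniques concrete: post-training dynamic quantization in PyTorch converts a model's linear layers to int8, and the security question is whether behavior such as refusing unsafe prompts survives the conversion. A minimal sketch with a stand-in model:

```python
# Dynamic int8 quantization of Linear layers, via PyTorch's built-in API.
# Audits like the paper's ask whether safety behavior survives this step.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 128))  # stand-in

quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8  # only Linear layers are converted
)
print(quantized)
```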
February 28, 2025 at 5:55 AM
So you’ve heard of LLMs… but do you know about SLMs, or even NLMs?
@Neo4j’s Andreas Kollegger says there are exciting things happening as smaller models are relatively easy to run.

🔴 Join us for The Weekly Developer Show every Wednesday at 12.30 CET (11.30 GMT)
March 21, 2025 at 9:06 AM
Jennifer Chen, Aidar Myrzakhan, Yaxin Luo, Hassaan Muhammad Khan, Sondos Mahmoud Bsharat, Zhiqiang Shen
DRAG: Distilling RAG for SLMs from LLMs to Transfer Knowledge and Mitigate Hallucination via Evidence and Graph-based Distillation
https://arxiv.org/abs/2506.01954
June 3, 2025 at 4:25 AM
ALLMs, specifically Gemini-2.5-pro, can evaluate speaking styles of SLMs with agreement comparable to human evaluators, showing potential as automatic judges.
Audio-Aware Large Language Models as Judges for Speaking Styles
Cheng-Han Chiang, Xiaofei Wang, Chung-Ching Lin, Kevin Lin, Linjie Li, Radu Kopetz, Yao Qian, Zhendong Wang, Zhengyuan Yang, Hung-yi Lee, Lijuan Wang
June 9, 2025 at 10:10 AM
language models (SLMs) to emerge long CoT. Thus, distillation becomes a practical method to enable SLMs for such reasoning ability. However, the long CoT often contains a lot of redundant contents (e.g., overthinking steps) which may make SLMs hard to [2/5 of https://arxiv.org/abs/2505.18440v1]
May 27, 2025 at 5:58 AM
IndiaAI Adds 14,000 GPUs, Count Rises to 32,000, Confirms IT Minister Ashwini Vaishnaw
The procurement aims to strengthen India’s AI compute capacity, particularly for training large and small language models (LLMs and SLMs), which power most generati...

May 29, 2025 at 2:36 PM
metalinguistic explanation back to the target model bridges the gap between knowing a rule and using it. On SLMs, grammar prompting alone trims the average LLM-SLM accuracy gap by about 20%, and when paired with chain-of-thought, by 56% (13.0 pp -> [4/5 of https://arxiv.org/abs/2506.02302v1]
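The fragment describes grammar prompting: the rule (a metalinguistic explanation) is fed back to the small model before the task, optionally combined with chain-of-thought. A hypothetical construction, with the rule and sentence invented for illustration rather than taken from the paper:

```python
# Hypothetical grammar-prompting setup: prepend the rule, optionally ask for
# step-by-step reasoning, then pose the acceptability judgment.
def grammar_prompt(rule: str, sentence: str, use_cot: bool = True) -> str:
    steps = "Think step by step, then answer." if use_cot else "Answer directly."
    return (
        f"Grammar rule: {rule}\n"
        f"{steps}\n"
        f"Is this sentence grammatical? {sentence}"
    )

print(grammar_prompt(
    "English subject and verb must agree in number.",
    "The dogs barks loudly.",
))
```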
June 4, 2025 at 5:59 AM
Small language models (SLMs) paired with Kubernetes and Function as a Service (FaaS) have emerged as alternatives to LLMs for agentic AI use cases.
Cloud Native and Open Source Help Scale Agentic AI Workflows
June 13, 2025 at 5:00 AM
Why SLMs are better for agentic systems (see the routing sketch after this list):
Powerful for most language tasks
Faster, efficient, and easier to manage
Cheaper than LLMs for repetitive tasks
More consistent, reducing errors compared to creative LLMs
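The cost and consistency points above amount to a routing pattern: keep routine calls on a local SLM and escalate only hard requests to an LLM. A sketch with placeholder backends:

```python
# Sketch of the routing idea: send routine calls to a cheap local SLM and
# escalate only hard or high-stakes requests to a large hosted model.
# `slm_complete` / `llm_complete` are placeholders for real backends.
def route(task: str, hard: bool) -> str:
    if hard:
        return llm_complete(task)   # slower, pricier, more capable
    return slm_complete(task)       # fast, cheap, consistent on narrow tasks

def slm_complete(task: str) -> str:
    return f"[SLM] {task}"

def llm_complete(task: str) -> str:
    return f"[LLM] {task}"

print(route("Extract the invoice date from this email.", hard=False))
```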
July 5, 2025 at 6:30 PM
I thoroughly enjoyed your blog post. So much so that I'm going to run a few tests with Lucy soon.
I really like the idea of using LLMs/SLMs as clever orchestrators and putting the relevance and specialization into the tools.
But time is short…
August 9, 2025 at 3:49 PM
arXiv:2504.07989v1 Announce Type: new
Abstract: Small Language Models (SLMs) offer efficient alternatives to LLMs for specific domains. The 2023 TinyStories study developed an English dataset that allows SLMs with 1 to 10 million parameters to [1/6 of https://arxiv.org/abs/2504.07989v1]
April 14, 2025 at 5:55 AM
How SLMs and knowledge graphs supercharge AI ift.tt/7CK9VbR
Why real AI benefit often arrives in small model packages
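The pairing works because the graph supplies verified facts and the small model only has to phrase them. A toy sketch; the graph contents and helper are invented for illustration:

```python
# Toy sketch: ground a small model's answer in facts pulled from a graph.
facts = {
    ("ACME", "headquartered_in"): "Oslo",
    ("ACME", "founded"): "1999",
}

def graph_lookup(entity: str) -> list[str]:
    # Render every (subject, predicate, object) triple about the entity.
    return [f"{s} {p.replace('_', ' ')} {o}"
            for (s, p), o in facts.items() if s == entity]

question = "Where is ACME based and when was it founded?"
prompt = "Facts:\n" + "\n".join(graph_lookup("ACME")) + f"\n\nQuestion: {question}"
# `prompt` then goes to the SLM; the model rephrases rather than recalls.
print(prompt)
```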
August 2, 2025 at 10:18 AM
FRESH NEWSLETTER: Enterprises are moving away from massive LLMs & embracing smaller, specialized AI models. SLMs are the true path to sustainable, practical value—covering market trends, startup innovations & real-world benchmarks siliconsandstudio.substack.... #AI #GenAI #AItools
August 20, 2025 at 9:34 AM