Dario
@dariogargas.bsky.social
Senior AI researcher at BSC. Random thinker at home.
Are you into chip design, EDA, or do you just write RTL code for fun? Check out the largest benchmarking effort of LLMs for Verilog generation: TuRTLe 🐢

It includes 40 open LLMs evaluated on 4 benchmarks across 5 tasks. And it's only growing!

huggingface.co/spaces/HPAI-...

arxiv.org/abs/2504.01986
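(Side note for anyone reproducing this kind of evaluation: functional-correctness benchmarks for code and RTL generation are usually scored with the pass@k estimator. The snippet below is only a minimal sketch of that standard formula, not the TuRTLe harness itself, and the sample counts are made up.)

```python
# Minimal sketch of the standard unbiased pass@k estimator (Chen et al., 2021),
# commonly used to score functional correctness of generated (System)Verilog.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples, drawn from n generations
    of which c pass the testbench, is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative numbers: 20 completions per problem, 7 pass simulation.
print(round(pass_at_k(n=20, c=7, k=1), 3))  # 0.35
print(round(pass_at_k(n=20, c=7, k=5), 3))  # 0.917
```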
June 3, 2025 at 4:08 PM
So many healthcare LLMs, and yet so little information! Check out this table summarizing contributions, and find more details in our latest pre-print: arxiv.org/abs/2505.04388
May 22, 2025 at 12:19 PM
The Aloe Beta preprint includes full details on data & training setup.
Plus four different evaluation methods (including evaluation by medical experts).
Plus a risk assessment of healthcare LLMs.

Two years of work condensed into a few pages, figures, and tables.

Love open research!
huggingface.co/papers/2505....
May 21, 2025 at 8:06 AM
Last week our team presented this at NAACL. Check out the beautiful poster they put together 😍
May 6, 2025 at 4:39 PM
The recipe is simple 🧑‍🍳 :
1. A good open model 🍞
2. A properly tuned RAG pipeline 🍒

And you'll be cooking up a five-star AI system ⭐ ⭐ ⭐ ⭐ ⭐

See you at the AIAI 2025 conference, where we will be presenting this work, done at @bsc-cns.bsky.social and @hpai.bsky.social
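For the curious, here is a minimal sketch of that recipe, assuming sentence-transformers for retrieval and any open chat model for generation. The encoder name and the toy medical corpus are illustrative placeholders, not the configuration from the paper.

```python
# Minimal RAG sketch: embed a toy corpus, retrieve top passages for a query,
# and assemble a grounded prompt for an open chat model (generation step omitted).
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "Aspirin is a nonsteroidal anti-inflammatory drug (NSAID).",
    "Metformin is a first-line medication for type 2 diabetes.",
    "Ibuprofen can irritate the stomach lining if taken on an empty stomach.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative choice of encoder
doc_vecs = embedder.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the query (cosine similarity)."""
    q_vec = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec
    return [corpus[i] for i in np.argsort(-scores)[:k]]

query = "What is metformin used for?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
print(prompt)  # feed this prompt to any open chat model
```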
April 4, 2025 at 2:35 PM
How expensive 🫰 is it to get the best LLM performance? How much cash do you need to burn 💸 to get reliable responses? Pareto-optimal plots answer these questions.

Our research shows it is economically feasible and scalable to achieve o1-level performance at a fraction of the cost.
buff.ly/ji1VHiV
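For readers who want to reproduce this kind of plot: a configuration is Pareto-optimal here if no other configuration is both cheaper and more accurate. Below is a minimal sketch of extracting that frontier, with made-up (cost, accuracy) numbers rather than the paper's results.

```python
# Sketch: extract the Pareto frontier from (cost, accuracy) points.
# A point is Pareto-optimal if no other point is both cheaper and more accurate.
def pareto_frontier(points: list[tuple[float, float]]) -> list[tuple[float, float]]:
    frontier: list[tuple[float, float]] = []
    best_acc = float("-inf")
    # Sort by ascending cost; break cost ties by descending accuracy.
    for cost, acc in sorted(points, key=lambda p: (p[0], -p[1])):
        if acc > best_acc:  # strictly better than every cheaper configuration
            frontier.append((cost, acc))
            best_acc = acc
    return frontier

# Illustrative numbers, not results from the paper.
runs = [(0.5, 0.62), (1.0, 0.71), (2.0, 0.70), (4.0, 0.78), (8.0, 0.79)]
print(pareto_frontier(runs))  # [(0.5, 0.62), (1.0, 0.71), (4.0, 0.78), (8.0, 0.79)]
```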
April 4, 2025 at 2:35 PM
Our LLM safety project, Egida, reached 2K downloads 😀
It includes over 60K safety questions expanded with jailbreaking prompts.
The four models trained (and released) show strong signs of safety alignment and generalization capacity. Check out the 🤗 HF page and the paper for details!
buff.ly/kxFVyl2
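If you want to poke at the data yourself, something along these lines should work; note that the dataset path and split name below are my assumptions, so check the HF page for the exact identifiers.

```python
# Sketch: pull the safety dataset from the Hugging Face Hub and inspect a row.
# NOTE: "HPAI-BSC/Egida" and the split name are assumed; see the HF page for exact IDs.
from datasets import load_dataset

ds = load_dataset("HPAI-BSC/Egida", split="train")
print(ds)      # row count and column names
print(ds[0])   # one safety question with its jailbreak-expanded prompt
```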
April 1, 2025 at 9:11 PM
So many keywords around LLM training, it's easy to get lost.
For an upcoming paper, I did this little visual summary. Would you change anything?
February 18, 2025 at 5:48 PM
Trying to put some order into LLM keywords for an upcoming paper. Green concepts are on a different axis and only partly overlap with the elements in blue.
January 30, 2025 at 11:18 AM
To the editors out there, are there any serious downsides to adding an emoji to the title? The stochastic parrots paper seems to be doing alright...
January 27, 2025 at 8:56 AM
Aloe 🌱: How I Learned to Stop Worrying and Love LLMs

Finishing the journal paper with all the details right now!
huggingface.co/collections/...
January 24, 2025 at 7:08 PM
I would NOT bring a copy of "Attention is all you need". I learned that lesson from "Back to the Future".
January 21, 2025 at 11:08 AM
Answering "More than you have" to the question "How much data is needed for an AI to solve my problem?"
December 31, 2024 at 6:58 PM
Models are open. Data is open. Recipe will be published soon. Download everything and try it out here: https://buff.ly/49F9Xvq
Thanks to the HPAI team, and the people of the open models community, who made this possible.
December 13, 2024 at 10:13 PM
Aloe Beta 🌱 includes a DPO alignment with medical and generalized preferences, as well as strong red teaming to prevent dangerous outputs 😱

We work with HPC experts from @bsc-cns.bsky.social to optimize the training and reduce its cost, now reaching 400 TFLOPS for the 70B version and 460 for the 7B.
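For anyone curious what the DPO step optimizes, below is a minimal sketch of the standard DPO objective in PyTorch; a hand-rolled illustration, not the Aloe Beta training code, and the log-probabilities are toy values.

```python
# Sketch of the core DPO objective: push the policy to prefer the "chosen" response
# over the "rejected" one, regularized toward a frozen reference model.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor, policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor, ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """-log sigmoid(beta * ((pi_c - ref_c) - (pi_r - ref_r))), averaged over the batch."""
    chosen_margin = policy_chosen_logps - ref_chosen_logps
    rejected_margin = policy_rejected_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()

# Toy per-response summed log-probabilities (illustrative values only).
loss = dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                torch.tensor([-13.0]), torch.tensor([-14.0]))
print(loss.item())  # ~0.598
```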
December 13, 2024 at 10:13 PM
AI Daily Dose - Day 11: Loosely speaking, a bias is a pattern in the data that can be identified and used by a model. Without biases, ML models cannot operate, but under certain undesirable biases, ML can produce wrong or dangerous outputs. Only humans can distinguish between good and bad biases.
December 7, 2024 at 5:41 PM
AI Daily Dose - Day 10: Specifying what an AI model must learn from a set of data scales poorly when it depends on human criteria. In Machine Learning (ML), models are only told how to learn, not what. ML models exploit large volumes of data to find patterns in accordance with their programming.
December 6, 2024 at 5:41 PM
AI Daily Dose - Day 9: The famous quote "All models are wrong, but some are useful" refers to the impossibility of perfectly capturing reality, due to its unlimited complexity and granularity. Instead, models should serve a specific purpose, and model reality to that end.
December 5, 2024 at 5:41 PM
One of our research engineers is at Meta’s Global Open Source Innovation Summit this week, presenting our family of fine-tuned open healthcare LLMs: Aloe

Check out the entire model family here: huggingface.co/collections/...
New, even better Aloe models based on Qwen 2.5 coming next week ;)
December 5, 2024 at 10:12 AM
AI Daily Dose - Day 8: An AI model is an artifact that produces an output given an input, following a pre-defined process. AI models seek to mimic some behavior or capability of human intelligence, for example by building formal representations of knowledge and reasoning.
December 4, 2024 at 5:41 PM
AI Daily Dose - Day 6: An AI agent is an autonomous entity capable of learning, making decisions, and taking actions within a defined environment. The concept of agency is crucial in AI, as it allows systems to act independently and responsibly, leading to more advanced and ethical AI applications.
December 2, 2024 at 5:41 PM
In fairness, the technology promised nothing.
November 29, 2024 at 5:41 PM
AI Daily Dose - Day 5: The concept of Embodiment posits that true intelligence is inextricably linked to a physical form, a vehicle to interact with the world. This suggests that true AI must become integrated with robotics to perceive and interact with the world through sensors and actuators.
November 29, 2024 at 5:41 PM
AI Daily Dose - Day 4: Non-symbolic AI includes methods like SVMs, neural networks and deep learning. These enable mechanisms for processing raw world data at scale, but lack the capacity to operate with symbols, and are therefore limited for complex reasoning and planning tasks.
November 28, 2024 at 5:41 PM
AI Daily Dose - Day 3: Symbolic AI includes methods like logic and inference, ontologies, and knowledge representations. These provide rich descriptors but lack automatic means to expand the model's knowledge at scale.
November 27, 2024 at 5:41 PM