Rafael Pardinas
@muchomuchacho.bsky.social
AI (NLP, RL) Researcher at ServiceNow Research (London, UK)
Pinned
Large-scale RL with LLMs you say? #RL #LLMs

huggingface.co/blog/Service...
PipelineRL
A Blog post by ServiceNow on Hugging Face
huggingface.co
April 26, 2025 at 2:06 PM
Excellent points, I completely agree.
I hate the way articles about BSky keep framing the choice to move here in terms of left/right and echo chambers. For me the bigger issues are wanting to see links and current news, wanting to see posts that aren't rage-baiting or content farming, and wanting genuinely diverse conversations.
November 27, 2024 at 10:40 PM
Reposted by Rafael Pardinas
This is actually a really cool interview; wish I could read it properly (here's a Google Translate link: mp-weixin-qq-com.translate.goog/s/r9zZaEgqAa...)
November 25, 2024 at 5:57 PM
Making Structured Generation Faster Than Unstructured
blog.dottxt.co
November 25, 2024 at 12:24 PM
Anyone using this tool regularly? I'd like to know pros/cons:

github.com/bodaay/Huggi...
GitHub - bodaay/HuggingFaceModelDownloader: Simple go utility to download HuggingFace Models and Datasets
Simple go utility to download HuggingFace Models and Datasets - bodaay/HuggingFaceModelDownloader
github.com
November 25, 2024 at 10:31 AM
R1-lite is a maths-beast.
November 23, 2024 at 1:23 PM
Reposted by Rafael Pardinas
About to send this to my newsletter list tomorrow morning.

Did I miss any important points?

justinjackson.ca/twitter-blue...
Leaving Twitter for Bluesky
After 15 years and 50,000 tweets on Twitter/X, I'm moving to Bluesky, which (in the last month) has become 1,000x more fun than X.
justinjackson.ca
November 23, 2024 at 7:15 AM
AI inference is like compression for thought. Faster paths from input to insight mean AI can explore more possibilities.
November 23, 2024 at 11:09 AM
AI inference isn't just about speed; it's about depth. Every microsecond saved in computation creates space for deeper reasoning chains, turning rapid pattern matching into nuanced understanding. We're not just making AI faster, we're making it think better. #AI #ML
November 22, 2024 at 7:49 PM
Reposted by Rafael Pardinas
Our new blog post is out!

@willkurt.bsky.social provides a rebuttal to a reasonably well-known paper which concluded that structured generation with LLMs always results in worse performance.

We do not find the same thing.

blog.dottxt.co/say-what-you...
November 21, 2024 at 6:23 PM
@karpathy.bsky.social You should revive your account here. We miss your content. #AI
November 21, 2024 at 1:45 PM
Mistral has entered the chat
Search, vision, ideation, coding… all yours for free.
mistral.ai
November 19, 2024 at 4:53 PM
Testing the waters over here. It had to happen. I hope more of the AI research community moves over here soon!
November 15, 2024 at 10:58 AM