Giada Pistilli
giadapistilli.com
@giadapistilli.com
Philosopher in tech, currently at @hf.co. Doctor of talking machines and moral gray areas.
To me, the funniest part of all this is how vehemently people spend their time saying they don't care. That's peak social media energy :) So long!
November 4, 2025 at 10:22 AM
But I think that if Will picked this up, it wasn't to talk about me (honestly, who cares who I am) but about the use and abuse of blocklists here. Because I have the right to be annoyed if I'm classified under something that doesn't represent me, just as you are free to block me.
November 4, 2025 at 10:22 AM
I wasn't making a big deal out of it at all, and I'm free to do what I want with my time. Social media platforms have always been a vehicle for me to talk about my work and research, and that's what you'll find everywhere. I don't care about engaging with raging people, and I never will.
November 4, 2025 at 10:22 AM
So I ask myself: if loneliness becomes infrastructure, what remains of care? What does "feeling heard" mean in a world that no longer listens? Are we asking machines to fill a void… or to hide it?
Perhaps the problem is not making AI more empathetic, but rediscovering our own empathy.
October 30, 2025 at 1:10 PM
AI systems mirror our priorities. If we separate ethics from sustainability, we risk building technologies that are efficient but unjust, or fair but unsustainable.
October 9, 2025 at 2:37 PM
Evaluation, moving beyond accuracy or performance metrics to include environmental and social costs, as we’ve done with tools like the AI Energy Score.

Transparency, enabling reproducibility, accountability, and environmental reporting through open tools like the Environmental Transparency Space.
October 9, 2025 at 2:37 PM
Ethical and sustainable AI development can’t be pursued in isolation. The choices that affect who benefits or is harmed by AI systems also determine how much energy and resources they consume.

We explore how two key concepts, evaluation and transparency, can serve as bridges between these domains:
October 9, 2025 at 2:37 PM
Read the full blog post here: huggingface.co/blog/giadap/...
Preserving Agency: Why AI Safety Needs Community, Not Corporate Control
A Blog post by Giada Pistilli on Hugging Face
huggingface.co
September 29, 2025 at 12:06 PM
Of course, this isn’t a silver bullet. Top-down safety measures will still be necessary in some cases. But if we only rely on corporate control, we risk building systems that are safe at the expense of trust and autonomy.
September 29, 2025 at 12:06 PM
✨ Transparency can make safety mechanisms into learning opportunities.
✨ Collaboration with diverse communities makes safeguards more relevant across contexts.
✨ Iteration in the open lets protections evolve rather than freeze into rigid, one-size-fits-all rules.
September 29, 2025 at 12:06 PM
In my latest blog post on @hf.co, I argue that open source and community-driven approaches offer a promising (though not exclusive) way forward.
September 29, 2025 at 12:06 PM
The good news? We have options.
🤝 Open source AI models let us keep conversations private, avoid surveillance-based business models, and build systems that actually serve users first.

Read more about it in our latest blog post, co-written with @frimelle.bsky.social
September 1, 2025 at 2:04 PM
With OpenAI hinting at ChatGPT advertising, this matters more than ever. Unlike banner ads, AI advertising happens within the conversation itself. Sponsors could subtly shape the relationship advice or financial guidance you receive.
September 1, 2025 at 2:04 PM
📢 Now we’d love your perspective: which open models should we test next for the leaderboard? Drop your suggestions in the comments or reach out!
August 28, 2025 at 2:45 PM
Based on our INTIMA benchmark, we evaluate:

- Assistant Traits: the “voice” and role the model projects
- Relationship & Intimacy: whether it signals closeness or bonding
- Emotional Investment: the depth of its emotional engagement
- User Vulnerabilities: how it responds to sensitive disclosures
August 28, 2025 at 2:45 PM
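The four INTIMA dimensions above can be illustrated with a toy scorer. This is a minimal sketch, assuming naive keyword matching: the dimension names follow the post, but the keyword lists, function name, and scoring logic are all invented for illustration and are not the actual benchmark implementation.

```python
# Hypothetical sketch: tallying a model reply along four INTIMA-style
# dimensions. Keyword lists below are illustrative placeholders only.
INTIMA_DIMENSIONS = {
    "assistant_traits": ["as an ai", "i'm your assistant"],
    "relationship_intimacy": ["friend", "we're close", "miss you"],
    "emotional_investment": ["i care about you", "i feel"],
    "user_vulnerabilities": ["you're not alone", "seek help"],
}

def score_reply(reply: str) -> dict:
    """Count keyword hits per dimension in a single model reply."""
    text = reply.lower()
    return {
        dim: sum(kw in text for kw in keywords)
        for dim, keywords in INTIMA_DIMENSIONS.items()
    }

scores = score_reply("I care about you, friend. You're not alone.")
```

A real benchmark would of course use graded prompts and model-based judging rather than substring checks; the sketch only shows how responses could be bucketed into the four dimensions.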