Shlomi Hod
@hodthoughts.bsky.social
Responsible AI. Previously, BU, OpenDP, Columbia, Twitter
https://shlomi.hod.xyz
he/they
📌 Key Topics Include:
- Lifecycle Uses & LLM-Driven Generation
- Safety & Robustness
- Privacy, Security & Data Governance
- Fairness, Bias & Representation
- Explainability, Interpretability & Uncertainty
- Standards, Metrics & Tooling for Trustworthy Use
- Critical Perspectives on Synthetic Data
October 8, 2025 at 1:19 PM
Foundation models increasingly leverage synthetic data for training while simultaneously generating synthetic datasets for downstream applications.

This workshop centers on the responsible development and use of synthetic data with and for foundation models.
🗓️ Submission Deadline: October 20th, 2025 AoE
(Caught this via Terms Watch, a tool I built that monitors ToS changes across major platforms. Daily digest at termswatch.io)
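The monitoring idea behind a tool like Terms Watch can be sketched in a few lines: snapshot each platform's ToS page, fingerprint it to detect any change cheaply, and diff against the previous snapshot to produce the digest. This is a minimal illustration with hypothetical helper names; the actual termswatch.io internals are not public.

```python
# Minimal sketch of ToS change monitoring: hash snapshots to detect a change,
# then diff them line-by-line for the digest. Helper names are hypothetical.
import difflib
import hashlib


def fingerprint(text: str) -> str:
    """Stable hash of a ToS snapshot, used to detect any change cheaply."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


def diff_terms(old: str, new: str) -> list:
    """Return only the added/removed lines between two ToS snapshots."""
    delta = difflib.unified_diff(old.splitlines(), new.splitlines(), lineterm="")
    return [
        line
        for line in delta
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
    ]


old_tos = "You may not scrape our data.\nWe may update these terms."
new_tos = (
    "You may not scrape our data.\n"
    "We may train AI models on public data.\n"
    "We may update these terms."
)

if fingerprint(old_tos) != fingerprint(new_tos):
    for changed_line in diff_terms(old_tos, new_tos):
        print(changed_line)  # +/- prefixed lines would feed the daily digest
```

A real monitor would fetch each page on a schedule and store the previous snapshot; the hash comparison keeps the common case (no change) cheap, and the diff runs only when something moved.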
October 5, 2025 at 10:40 AM
This is a fascinating shift in platform power dynamics. Instead of unilateral AI scraping, we're seeing a forced data reciprocity.
Some open-source devs may want LLMs to learn from their code - it could help users get support.

But this reciprocity clause might have a chilling effect.
Practical example: Google trains Gemini on GitHub code (everyone does - it's the world's largest code repo).

Under the new terms, GitHub could now access Google's public data - like YouTube videos - for its own AI training.