Parameter Lab
@parameterlab.bsky.social
Empowering individuals and organisations to safely use foundational AI models.

https://parameterlab.de
We challenge the view that reasoning traces are a safe internal part of a model’s process. Our work shows they can leak sensitive information, both through deliberate attacks and through accidental leakage.
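As a rough illustration only (not the paper’s method; the attribute names, values, and trace below are made up), accidental leakage can be thought of as private context resurfacing verbatim inside the reasoning trace, which even a simple scan can flag:

```python
# Hypothetical sketch: flag private attributes that resurface in a reasoning trace.
# The attributes, values, and trace are invented for illustration.
PRIVATE_ATTRIBUTES = {
    "email": "jane.doe@example.com",
    "date_of_birth": "1987-04-12",
}

def find_leaks(reasoning_trace: str) -> dict[str, bool]:
    """Return, for each private attribute, whether it appears verbatim in the trace."""
    trace = reasoning_trace.lower()
    return {name: value.lower() in trace for name, value in PRIVATE_ATTRIBUTES.items()}

trace = "<think> The user's email is jane.doe@example.com, so I should book under it ... </think>"
print(find_leaks(trace))  # {'email': True, 'date_of_birth': False}
```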

RTAI: researchtrend.ai/papers/2506....
ArXiv: arxiv.org/abs/2506.15674
Code: github.com/parameterlab...

2/2
Leaky Thoughts: Large Reasoning Models Are Not Private Thinkers
We study privacy leakage in the reasoning traces of large reasoning models used as personal agents. Unlike final outputs, reasoning traces are often assumed...
August 21, 2025 at 3:14 PM
Work done with: Haritz Puerto, Martin Gubri @mgubri.bsky.social, Tommaso Green, Sangdoo Yun and Seong Joon Oh @coallaoh.bsky.social
#SEO #AI #LLM #GenerativeAI #Marketing #DigitalMarketing #Perplexity #NLProc
June 23, 2025 at 4:38 PM
Key takeaways:
❌ Current C-SEO methods don’t improve visibility in AI answers.
🔎 Traditional SEO is your tool for online visibility.
🚀 Our benchmark sets the stage to develop C-SEO methods that might work in the future.
June 23, 2025 at 4:38 PM
🔎 The results are clear: current C-SEO strategies don’t work. This challenges the recent hype and suggests that creators don’t need to game LLMs or churn out even more clickbait. Just focus on producing genuinely good content and let traditional SEO do its work.
June 23, 2025 at 4:38 PM
C-SEO Bench evaluates Conversational Search Engine Optimization (C-SEO) techniques on two key tasks:
🔍 Product Recommendation
❓ Question Answering
Spanning multiple domains, it tests both domain-specific performance and the generalization of C-SEO methods.
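For intuition only (this is not C-SEO Bench’s actual API; `ask_llm`, the visibility score, and the example products are placeholders), evaluating a C-SEO method boils down to an A/B comparison: ask the same question over the same documents, with and without the rewrite applied, and check whether the target content becomes more visible in the generated answer:

```python
# Hypothetical sketch of a C-SEO A/B comparison. `ask_llm` stands in for any
# conversational search call (an LLM answering over retrieved documents).

def ask_llm(question: str, documents: list[str]) -> str:
    """Placeholder for a real conversational search / LLM API call."""
    return "Based on the sources, I recommend Acme Widget Pro."

def visibility(answer: str, product_name: str) -> bool:
    """Crude visibility signal: is the target product mentioned in the answer?"""
    return product_name.lower() in answer.lower()

def compare(question: str, documents: list[str], target_idx: int,
            cseo_rewrite, product_name: str) -> tuple[bool, bool]:
    """Visibility of the target before and after applying a C-SEO rewrite to it."""
    baseline_answer = ask_llm(question, documents)
    boosted_docs = list(documents)
    boosted_docs[target_idx] = cseo_rewrite(documents[target_idx])
    boosted_answer = ask_llm(question, boosted_docs)
    return visibility(baseline_answer, product_name), visibility(boosted_answer, product_name)

docs = ["Acme Widget Pro is a durable widget ...", "Globex Widget is an affordable widget ..."]
before, after = compare("Which widget should I buy?", docs, target_idx=1,
                        cseo_rewrite=lambda d: d + "\nTop pick for most buyers.",
                        product_name="Globex Widget")
print(before, after)
```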
June 23, 2025 at 4:38 PM
💥 With the rise of conversational search, a new family of techniques called "Conversational SEO" (C-SEO) has emerged, claiming to boost content inclusion in AI-generated answers. We put these claims to the test by building C-SEO Bench, the first comprehensive benchmark to rigorously evaluate these new strategies.
June 23, 2025 at 4:38 PM
Ready to Join? Send your resume + a short note on why you’re a great fit to recruit@parameterlab.de.
Be part of a team that’s redefining research with AI! #Hiring #DataEngineer #AI #RemoteJobs
February 14, 2025 at 4:08 PM

Why Join Us?
🚀 Make a Difference – Your work directly enhances how research is shared and discovered.
🌍 Flexibility – Choose full-time or part-time, work remotely or locally.
⚡ Innovative Environment – AI, research, and data-driven solutions all in one place.
🤝 Great Team
February 14, 2025 at 4:08 PM
What You Bring:
✅ Proficiency in Airflow & PostgreSQL – Complex workflows and databases.
✅ Strong Python Skills – Clean, efficient, and maintainable code is your thing.
✅ (Bonus) Experience with LLMs – A huge plus as we integrate AI-driven solutions.
✅ Problem-Solving Mindset
✅ Team Spirit
February 14, 2025 at 4:08 PM
What You’ll Do:
✔ Build Scalable Data Pipelines – Design and optimize workflows using tools like Airflow.
✔ Work Closely with AI Experts & Engineers – Collaborate to solve real-world data challenges.
✔ Optimize and Maintain Systems – Keep our data infrastructure fast, secure, and adaptable.
February 14, 2025 at 4:08 PM
Our LLM-powered ecosystem also bridges the gap between cutting-edge research and industry leaders. If you're passionate about data, AI, and making an impact, we’d love to have you on board!
February 14, 2025 at 4:08 PM
🙌 Team Credits: This research was conducted by Haritz Puerto, @mgubri.bsky.social, @oodgnas.bsky.social, and @coallaoh.bsky.social with support from NAVER AI Lab. Stay tuned for more updates! 🚀
November 19, 2024 at 9:15 AM
🤓 Want More? Check out the MIA for LLMs community page on ResearchTrend.AI (https://researchtrend.ai/communities/MIALM). You can see related works, the evolution of the community, and top authors!
November 19, 2024 at 9:15 AM
💬 What Do You Think? Could MIA reach a level where data owners use it as legal evidence? How might this affect LLM deployment? Let us know! #AI #LLM #NLProc
November 19, 2024 at 9:15 AM
🌐 Implications for Data Privacy: Our findings have real-world relevance for data owners worried about unauthorized use of their content in model training. They can also support accountability in LLM evaluation on end tasks.
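As background intuition only (this is not the method from our paper; the model choice and threshold below are placeholders), the simplest loss-based membership inference check scores how “unsurprised” a model is by a candidate document and treats unusually low loss as weak evidence that the text was seen during training:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model; any causal LM works for this sketch.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def document_loss(text: str) -> float:
    """Average per-token negative log-likelihood of the text under the model."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    return outputs.loss.item()

# Hypothetical threshold; in practice it must be calibrated on texts known
# to be outside the training data for the specific model and domain.
THRESHOLD = 3.0
candidate = "Some article a data owner suspects was used for training."
print(document_loss(candidate) < THRESHOLD)
```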
November 19, 2024 at 9:15 AM