EvalEval Coalition
@eval-eval.bsky.social
We are a researcher community developing scientifically grounded research outputs and robust deployment infrastructure for broader impact evaluations.

https://evalevalai.com/
📜Paper: arxiv.org/pdf/2511.056...
📝Blog: tinyurl.com/blogAI1

🤝At EvalEval, we are a coalition of researchers working towards better AI evals. Interested in joining us? Check out: evalevalai.com 7/7 🧵
November 13, 2025 at 1:59 PM
Continued...

📉 Reporting on social impact dimensions has steadily declined in both frequency and detail across major providers
🧑‍💻 Sensitive content gets the most attention, as it’s easier to define and measure

🛡️Solution? Standardized reporting & safety policies (6/7)
November 13, 2025 at 1:59 PM
Key Takeaways:

⛔️ First-party reporting is often sparse & superficial, with many reporting NO social impact evals
📉 On average, first-party scores are far lower than third-party scores (0.72 vs. 2.62 out of 3)
🎯 Third parties provide some complementary coverage (e.g., for GPT-4 and LLaMA) (5/7)
November 13, 2025 at 1:59 PM
💡 We also interviewed developers from for-profit and non-profit orgs to understand why some disclosures happen and why others don’t.

💬 TLDR: Incentives and constraints shape reporting (4/7)
November 13, 2025 at 1:59 PM
📊 What we did:

🔎 Analyzed 186 first-party release reports from model developers & 183 post-release evaluations (third-party)
📏 Scored 7 social impact dimensions: bias, harmful content, performance disparities, environmental costs, privacy, financial costs, & labor (3/7)
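For intuition, here is a minimal, hypothetical scoring sketch in Python; the 0–3 depth scale and dimension keys are illustrative assumptions for exposition, not the paper's exact coding scheme.

```python
# Illustrative sketch only: the 0-3 depth scale and dimension keys are
# assumptions, not the paper's exact rubric.
DIMENSIONS = [
    "bias", "harmful_content", "performance_disparities",
    "environmental_costs", "privacy", "financial_costs", "labor",
]

def coverage_score(report: dict[str, int]) -> float:
    """Average reporting-depth score (0 = not reported, 3 = detailed) across dimensions."""
    return sum(report.get(dim, 0) for dim in DIMENSIONS) / len(DIMENSIONS)

# Example: a sparse first-party report vs. a fuller third-party evaluation.
first_party = {"harmful_content": 2, "bias": 1}
third_party = {dim: 3 for dim in DIMENSIONS if dim != "labor"}
print(coverage_score(first_party), coverage_score(third_party))
```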
November 13, 2025 at 1:59 PM
While general capability evaluations are common, social impact assessments (covering bias, fairness, privacy, and more) are often fragmented or missing. 🧠

🎯Our goal: Explore the AI Eval landscape to answer who evaluates what and identify gaps in social impact evals!! (2/7)
November 13, 2025 at 1:59 PM
Note: General registration is limited by space capacity, and attendance will be confirmed by the organizers based on availability. Accepted posters will be invited to register for free and attend the workshop in person!
November 6, 2025 at 9:19 PM
📮 We are inviting students and early-stage researchers to submit an abstract (max 500 words) to be presented as a poster during the interactive session. Submit here: tinyurl.com/AbsEval

We have a rock-star lineup of AI researchers and an amazing program. Please RSVP as soon as possible! Stay tuned!
November 6, 2025 at 9:19 PM
💡This paper was brought to you as part of our spotlight series featuring papers on evaluation methods & datasets, the science of evaluation, and more.

📸Interested in working on better AI evals? Join us: evalevalai.com
October 31, 2025 at 3:47 PM
🚫 The approach also avoids mislabeled data and delays benchmark saturation, continuing to distinguish model improvements even at high performance levels.

📑Read more: arxiv.org/abs/2509.11106
October 31, 2025 at 3:47 PM
📊Results & Findings

🧪 Experiments across 6 LLMs and 6 major benchmarks:

🏃Fluid Benchmarking outperforms all baselines across all four evaluation dimensions: efficiency, validity, variance, and saturation.
⚡️It achieves lower variance with up to 50× fewer items needed!!
October 31, 2025 at 3:47 PM
It combines two key ideas:

✍️Item Response Theory: Models LLM performance in a latent ability space based on item difficulty and discrimination across models
🧨Dynamic Item Selection: Adaptive benchmarking where weaker models get easier items, while stronger models face harder ones
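A toy sketch of how these two ideas could fit together, assuming a standard 2PL IRT model and Fisher-information item selection; the item parameters and ability update below are simulated illustrations, not the paper's estimator.

```python
import numpy as np

# Toy illustration of 2PL IRT + adaptive item selection. Item parameters are
# simulated and the ability update is a crude stand-in, not the paper's method.
rng = np.random.default_rng(0)
a = rng.uniform(0.5, 2.0, size=100)   # item discrimination
b = rng.normal(0.0, 1.0, size=100)    # item difficulty

def p_correct(theta, i):
    """2PL IRT: probability a model with ability theta answers item i correctly."""
    return 1.0 / (1.0 + np.exp(-a[i] * (theta - b[i])))

def next_item(theta, asked):
    """Pick the unasked item with maximal Fisher information at the current theta."""
    p = p_correct(theta, np.arange(len(a)))
    info = a**2 * p * (1 - p)
    info[list(asked)] = -np.inf
    return int(np.argmax(info))

theta, asked = 0.0, set()
for _ in range(20):                              # administer 20 adaptive items
    i = next_item(theta, asked)
    asked.add(i)
    correct = rng.random() < p_correct(0.8, i)   # simulate a model with ability 0.8
    theta += 0.3 * (float(correct) - p_correct(theta, i))  # crude ability update
print(round(float(theta), 2))                    # estimated ability after 20 items
```

The effect is the one described above: as the ability estimate rises, the highest-information items are harder ones, so stronger models get routed to harder questions with far fewer items administered.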
October 31, 2025 at 3:47 PM
🔍How to address this? 🤔

🧩Fluid Benchmarking: This work proposes a framework inspired by psychometrics that uses Item Response Theory (IRT) and adaptive item selection to dynamically tailor benchmark evaluations to each model’s capability level.

Continued...👇
October 31, 2025 at 3:47 PM
⚠️ Evaluation results can be noisy and prone to variance & labeling errors.
🧱As models advance, benchmarks tend to saturate quickly, reducing their long-term usefulness.
🪃Existing approaches typically tackle just one of these problems (e.g., efficiency or validity).

What now⁉️
October 31, 2025 at 3:47 PM
💣Current SOTA benchmarking setups face several systematic issues:

📉It’s often unclear which benchmark(s) to choose, while evaluating on all available ones is too expensive, inefficient, and not always aligned with the intended capabilities we want to measure.

More 👇👇
October 31, 2025 at 3:47 PM
💡This is part of our new weekly spotlight series that will feature papers on evaluation methods & datasets, the science of evaluation, and more.

📷 Interested in working on better AI evals? Check out: evalevalai.com
October 24, 2025 at 4:44 PM
🏗️Therefore, fixing leaderboard design (e.g., private eval sets, provenance checks, randomized human tests) is critical for AI ecosystem security and safety.
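As a hypothetical illustration of one of these fixes, a provenance check could verify that the weights a leaderboard actually serves match the hashes registered at submission time; the registry format and workflow below are assumptions, not details from the paper.

```python
import hashlib
from pathlib import Path

# Hypothetical provenance check: compare served checkpoint files against the
# SHA-256 hashes a developer registered at submission time. The registry
# format is an assumption for illustration, not from the paper.
def file_sha256(path: Path, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_submission(weights_dir: Path, registered: dict[str, str]) -> list[str]:
    """Return the filenames whose current hash no longer matches the registered hash."""
    return [
        name for name, expected in registered.items()
        if file_sha256(weights_dir / name) != expected
    ]
```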

Read more: arxiv.org/pdf/2507.08983
October 24, 2025 at 4:44 PM
📊Key insights

🗳️Popular leaderboards (e.g., ChatArena, MTEB) can be exploited to distribute poisoned LLMs at scale
🔐Derivative models (finetuned, quantized, “abliterated”) are easy backdoor vectors. For instance, unsafe LLM variants often get downloaded as often as the originals!

Continued...
October 24, 2025 at 4:44 PM
🔍 Method:

🧮Introduces TrojanClimb, a framework showing how attackers can:

⌨️ Simulate leaderboard attacks in which malicious models achieve high test scores while embedding harmful payloads (across 4 modalities)
🔒 Leverage stylistic watermarks/tags to game voting-based leaderboards
October 24, 2025 at 4:44 PM
💡This spotlight series will feature papers on evaluation methods & datasets, the science of evaluation, and more. Stay tuned!

🤝 Interested in working on better AI evals? We are a coalition of researchers working on exactly that. Check out: evalevalai.com
October 17, 2025 at 4:15 PM
🧮 Benchmark Saturation != Reliability. Models achieve near-perfect scores without demonstrating true reliability.

📢 Highlights the gap between apparent competence & dependable reliability, underscoring the need for systematic reliability testing.
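One hedged sketch of what such testing could look like: re-query each prompt several times and report answer consistency alongside accuracy, so near-perfect benchmark scores and unstable behavior become visible separately (the resampling protocol is an assumption, not the paper's method).

```python
from collections import Counter

# Illustrative reliability probe: ask each prompt k times and measure how often
# the model's answers agree, alongside majority-vote accuracy. The protocol is
# an assumption for illustration, not the paper's method.
def reliability_report(ask, prompts, gold_answers, k: int = 5):
    """ask(prompt) -> answer string. Returns (accuracy, consistency), both in [0, 1]."""
    correct = consistent = 0
    for prompt, gold in zip(prompts, gold_answers):
        samples = [ask(prompt) for _ in range(k)]
        majority, count = Counter(samples).most_common(1)[0]
        correct += int(majority == gold)   # accuracy of the majority answer
        consistent += int(count == k)      # did all k samples agree?
    n = len(prompts)
    return correct / n, consistent / n
```

A model can score near 1.0 on accuracy while consistency stays low, which is exactly the competence-versus-reliability gap flagged above.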

Read more at: arxiv.org/pdf/2502.03461
October 17, 2025 at 4:15 PM