Aida Nematzadeh
@aidanematzadeh.bsky.social
Research scientist at Google DeepMind.🦎
She/her.
http://www.aidanematzadeh.me/
Most diffusion-based models use a fixed (model-tuned) guidance schedule. We show that picking the guidance value at inference time, conditioned on the prompt or capability, significantly improves performance.

arxiv.org/abs/2509.16131
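A minimal, self-contained sketch of what inference-time guidance selection could look like for classifier-free guidance (CFG). The sampler and scorer below are toy placeholders (random noise, random scores), and the grid search over scales is an illustrative stand-in, not the paper's actual method (see the arXiv link above for that):

```python
# Toy sketch: pick the CFG guidance scale w per prompt at inference time,
# instead of using one fixed, model-tuned value for every prompt.
# sample_with_cfg and score_alignment are hypothetical placeholders.

import numpy as np

GUIDANCE_GRID = [1.5, 3.0, 5.0, 7.5, 10.0]  # candidate CFG scales

def sample_with_cfg(prompt: str, w: float, rng: np.random.Generator):
    """Placeholder for a CFG sampling loop. A real sampler combines
    conditional and unconditional predictions at each step:
    eps = eps_uncond + w * (eps_cond - eps_uncond)."""
    return rng.standard_normal((64, 64, 3))  # stand-in "image"

def score_alignment(image, prompt: str, rng: np.random.Generator) -> float:
    """Placeholder for a prompt-image scorer (e.g. CLIP-style similarity)."""
    return float(rng.uniform())

def sample_with_prompt_conditioned_guidance(prompt: str, seed: int = 0):
    rng = np.random.default_rng(seed)
    candidates = [(w, sample_with_cfg(prompt, w, rng)) for w in GUIDANCE_GRID]
    best_w, best_img = max(
        candidates, key=lambda c: score_alignment(c[1], prompt, rng)
    )
    return best_img, best_w

if __name__ == "__main__":
    img, w = sample_with_prompt_conditioned_guidance("three cats on a sofa")
    print(f"selected guidance scale: {w}")
```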
September 30, 2025 at 4:00 PM
Reposted by Aida Nematzadeh
Our #CVPR2025 workshop on Emergent Visual Abilities and Limits of Foundation Models (EVAL-FoMo) is taking place this afternoon (1-6pm) in room 210.

Workshop schedule: sites.google.com/view/eval-fo...
EVAL-FoMo 2 - Schedule
Date: June 11 (1:00pm - 6:00pm)
sites.google.com
June 11, 2025 at 5:55 PM
Generative models are powerful evaluators/verifiers, with impact on both evaluation and post-training. Yet making them effective, particularly at distinguishing highly similar models/checkpoints, is challenging. The devil is in the details.
April 23, 2025 at 9:23 PM
Reposted by Aida Nematzadeh
I know multiple people who need to hear this piped into their offices during working hours
March 10, 2025 at 5:02 AM
Reposted by Aida Nematzadeh
I wrote about why funding for the NSF and NIH is important to me and to my hometown (Charlottesville, VA) at @cvilletomorrow.bsky.social. Thank you to @sciencehomecoming.bsky.social for inspiring me to do this!

www.cvilletomorrow.org/if-federal-f...
If federal funding for science is cut, we won't just be losing the research
Jessica B. Hamrick is a Virginia success story for her career in science. That career, she writes, wouldn't have been possible without federal funding for science.
www.cvilletomorrow.org
March 6, 2025 at 5:53 PM
Reposted by Aida Nematzadeh
Our 2nd Workshop on Emergent Visual Abilities and Limits of Foundation Models (EVAL-FoMo) is accepting submissions. We are looking forward to talks by our amazing speakers that include @saining.bsky.social, @aidanematzadeh.bsky.social, @lisadunlap.bsky.social, and @yukimasano.bsky.social. #CVPR2025
🔥 #CVPR2025 Submit your cool papers to the Workshop on Emergent Visual Abilities and Limits of Foundation Models 📷📷🧠🚀✨

sites.google.com/view/eval-fo...

Submission Deadline: March 12th!
EVAL-FoMo 2
A Vision workshop on Evaluations and Analysis
sites.google.com
February 13, 2025 at 4:02 PM
Reposted by Aida Nematzadeh
if you would like to attend #ICLR2025 but have financial barriers, apply for financial assistance!

our priority categories are student authors and contributors from underrepresented demographic groups & geographic regions.

deadline is march 2nd.

iclr.cc/Conferences/...
ICLR 2025 Financial Assistance
iclr.cc
January 21, 2025 at 2:34 PM
Reposted by Aida Nematzadeh
Our representational alignment workshop returns to #ICLR2025! Submit your work on how ML/cogsci/neuro systems represent the world & what shapes these representations 💭🧠🤖

w/ @thisismyhat.bsky.social, @dotadotadota.bsky.social, @sucholutsky.bsky.social, @lukasmut.bsky.social, @siddsuresh97.bsky.social
🚨Call for Papers🚨
The Re-Align Workshop is coming back to #ICLR2025

Our CfP is up! Come share your representational alignment work at our interdisciplinary workshop at
@iclr-conf.bsky.social

Deadline is 11:59 pm AOE on Feb 3rd

representational-alignment.github.io
January 16, 2025 at 11:35 PM
The RE application is now open: boards.greenhouse.io/deepmind/job...

And here is the link to the RS position:
boards.greenhouse.io/deepmind/job...
January 8, 2025 at 12:23 PM
Reposted by Aida Nematzadeh
What was the most impactful/visible/useful release on evaluation in AI in 2024?
January 6, 2025 at 12:10 PM
Reposted by Aida Nematzadeh
Bye, Felix – Kyunghyun Cho
kyunghyuncho.me
January 2, 2025 at 10:42 AM
Reposted by Aida Nematzadeh
A brilliant colleague and wonderful soul Felix Hill recently passed away. This was a shock and in an effort to sort some things out, I wrote them down. Maybe this will help someone else, but at the very least it helped me. Rest in peace, Felix, you will be missed. www.janexwang.com/blog/2025/1/...
Felix — Jane X. Wang
From the moment I heard him give a talk, I knew I wanted to work with Felix. His ideas about generalization and situatedness made explicit thoughts that had been swirling around in my head, incohe...
www.janexwang.com
January 3, 2025 at 4:02 AM
Reposted by Aida Nematzadeh
Felix Hill was such an incredible mentor — and occasional cold water swimming partner — to me. He's a huge part of why I joined DeepMind and how I've come to approach research. Even a month later, it's still hard to believe he's gone.
January 2, 2025 at 7:01 PM
Reposted by Aida Nematzadeh
It seems to me that the time is ripe for a Bluesky thread about how—and maybe even why—to befriend crows.

(1/n)
August 20, 2023 at 1:55 AM
Reposted by Aida Nematzadeh
Here's Veo 2, the latest version of our video generation model, as well as a substantial upgrade for Imagen 3 🧑‍🍳🚢

(Did I mention we are hiring on the Generative Media team, btw 👀)

blog.google/technology/g...
State-of-the-art video and image generation with Veo 2 and Imagen 3
We’re rolling out a new, state-of-the-art video model, Veo 2, and updates to Imagen 3. Plus, check out our new experiment, Whisk.
blog.google
December 16, 2024 at 5:35 PM
Reposted by Aida Nematzadeh
I've been getting a lot of questions about autoregression vs diffusion at #NeurIPS2024 this week! I'm speaking at the adaptive foundation models workshop at 9AM tomorrow (West Hall A), about what happens when we combine modalities and modelling paradigms.
adaptive-foundation-models.org
NeurIPS 2024 Workshop on Adaptive Foundation Models
adaptive-foundation-models.org
December 14, 2024 at 4:02 AM
What do text-to-image models know about numbers? Find out in our new paper 🦎 "Evaluating Numerical Reasoning in Text-to-Image Models" to be presented at #NeurIPS2024 (Wed 4:30-7:30 PM, #5304).

Dataset: github.com/google-deepm... (1386 prompts, 52,721 images, 479,570 annotations)
GitHub - google-deepmind/geckonum_benchmark_t2i: GeckoNum Benchmark for T2I Model Eval.
GeckoNum Benchmark for T2I Model Eval. Contribute to google-deepmind/geckonum_benchmark_t2i development by creating an account on GitHub.
github.com
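For illustration only: a hedged sketch of an exact-count metric of the kind such a benchmark could use. The `Prompt` fields and the `generate`/`count_objects` hooks are hypothetical, not the repo's actual prompt or annotation format (see the GitHub link above for that):

```python
# Toy exact-count metric: generate an image per prompt, count the target
# entity, and score exact matches. All names here are illustrative.

from dataclasses import dataclass

@dataclass
class Prompt:
    text: str    # e.g. "a photo of three cats"
    entity: str  # e.g. "cat"
    target: int  # e.g. 3

def exact_count_accuracy(prompts, generate, count_objects) -> float:
    """Fraction of prompts whose generated image contains exactly the
    requested number of entities."""
    hits = 0
    for p in prompts:
        image = generate(p.text)            # any T2I model
        n = count_objects(image, p.entity)  # e.g. a detector or human annotator
        hits += int(n == p.target)
    return hits / len(prompts)
```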
December 9, 2024 at 7:08 PM
Reposted by Aida Nematzadeh
Stop by our #NeurIPS tutorial on Experimental Design & Analysis for AI Researchers! 📊

neurips.cc/virtual/2024/tutorial/99528

Are you an AI researcher interested in comparing models/methods? Then your conclusions rely on well-designed experiments. We'll cover best practices + case studies. 👇
NeurIPS 2024 Tutorial: Experimental Design and Analysis for AI Researchers
neurips.cc
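Not from the tutorial itself, but in its spirit: a minimal sketch of one such best practice, comparing two models on the same examples with a paired bootstrap rather than eyeballing two means. The function name and score arrays are illustrative:

```python
# Paired bootstrap: resample evaluation examples (keeping the pairing between
# models) and see how often model A's mean score beats model B's.

import numpy as np

def paired_bootstrap(scores_a, scores_b, n_resamples=10_000, seed=0):
    """Return the fraction of bootstrap resamples where model A beats model B.

    scores_a, scores_b: per-example scores of the two models on the same
    evaluation set, paired by index.
    """
    a = np.asarray(scores_a, dtype=float)
    b = np.asarray(scores_b, dtype=float)
    assert a.shape == b.shape
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(a), size=(n_resamples, len(a)))
    diffs = (a[idx] - b[idx]).mean(axis=1)  # mean paired difference per resample
    return float((diffs > 0).mean())

# A result near 1.0 (or 0.0) suggests a consistent difference between the
# models; a result near 0.5 suggests the gap is within noise.
```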
December 7, 2024 at 6:15 PM
Reposted by Aida Nematzadeh
If you will be at #NeurIPS2024 @neuripsconf.bsky.social and would like to come see our models in action, come say hi 👋 and check out our demo at the GDM booth!

Wednesday, Dec. 11th @ 9:30-10:00.

Lots of other great things to see as well! Check it out: 👇
deepmind.google/discover/blo...
December 6, 2024 at 12:42 PM
I am hiring for RS/RE positions! If you are interested in language-flavored multimodal learning, evaluation, or post-training, apply here 🦎 boards.greenhouse.io/deepmind/job...

I will also be at #NeurIPS2024, so come say hi! (Please email me to find time to chat)
Research Scientist, Language
London, UK
boards.greenhouse.io
December 6, 2024 at 11:07 PM
Reposted by Aida Nematzadeh
Our big_vision codebase is really good! And it's *the* reference for ViT, SigLIP, PaliGemma, JetFormer, ... including fine-tuning them.

However, it's criminally undocumented. I tried using it outside Google to fine-tune PaliGemma and SigLIP on GPUs, and wrote a tutorial: lb.eyer.be/a/bv_tuto.html
December 3, 2024 at 12:18 AM