Reyhaneh Aghaei Saem
@reyhanehaghaeisaem.bsky.social
PhD student at EPFL
Many thanks to all my collaborators, Behrang Tafreshi,
@qzoeholmes.bsky.social, and @supanut-thanasilp.bsky.social. 😊

This work would not have been possible without their support, and it was a truly amazing experience.
August 1, 2025 at 10:24 AM
It is important to note that while our results show that some methods that were hoped to avoid exponential concentration still suffer from it, this does not imply that they cannot boost scalability in other ways or provide alternative training benefits.
August 1, 2025 at 10:19 AM
Hence, some proposals that were hoped to mitigate barren plateaus (BPs), such as forms of quantum natural gradient, CVaR, classical NN-assisted initialization, and rescaled gradient optimization, may still suffer from them. We provide numerical simulations of these optimization methods under different shot budgets.
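To make the role of the shot budget concrete, here is a minimal Python sketch (ours, not the paper's code): the loss is a toy stand-in for a circuit evaluation, and shot noise is idealized as Gaussian with standard deviation 1/sqrt(shots).

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(theta):
    # Toy stand-in for a VQA loss; a real run would evaluate a quantum circuit.
    return float(np.cos(theta).prod())

def noisy_loss(theta, shots):
    # Finite-shot estimate: measurement noise idealized as Gaussian with
    # standard deviation ~ 1/sqrt(shots).
    return loss(theta) + rng.normal(0.0, 1.0 / np.sqrt(shots))

def parameter_shift_grad(theta, shots):
    # Parameter-shift rule: dL/dtheta_i = [L(theta + pi/2 e_i) - L(theta - pi/2 e_i)] / 2,
    # with each term estimated from a finite number of shots.
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        shift = np.zeros_like(theta)
        shift[i] = np.pi / 2
        grad[i] = (noisy_loss(theta + shift, shots) - noisy_loss(theta - shift, shots)) / 2
    return grad

theta = rng.uniform(0, 2 * np.pi, size=8)
for shots in (10**2, 10**4, 10**6):
    print(shots, parameter_shift_grad(theta, shots))
```

When the true gradient is exponentially small, even the larger shot budgets here leave the estimates dominated by the noise term.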
August 1, 2025 at 10:16 AM
Finally, we present a practical, step-by-step guideline for determining whether a training or encoding procedure is vulnerable to scalability limitations due to exponential concentration.
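As a toy illustration of the kind of check the guideline formalizes (our own sketch, with a synthetic quantity standing in for a circuit evaluation), one can estimate the variance of the monitored quantity over random parameters at increasing sizes and look for exponential decay:

```python
import numpy as np

rng = np.random.default_rng(1)

def quantity(theta):
    # Toy parameter-dependent quantity whose variance shrinks exponentially
    # with dimension, standing in for a loss value or an outcome probability.
    return float(np.cos(theta).prod())

# Variance over random parameters vs. system size: here Var = (1/2)^n,
# so the printed values should drop exponentially with n.
for n in (4, 8, 12, 16):
    samples = [quantity(rng.uniform(0, 2 * np.pi, size=n)) for _ in range(2000)]
    print(f"n={n:2d}  Var={np.var(samples):.3e}")
```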
August 1, 2025 at 10:14 AM
To illustrate its practicality, we apply it to the training of a standard VQA loss exhibiting a BP landscape with a traditional gradient-based method. We show that, with high probability, the optimization trajectory follows a random walk.
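A toy sketch of this effect (our own simplification, with the true gradient scale and shot-noise floor set by hand): when the signal is exponentially small compared to the shot noise, gradient-descent updates reduce to noise, and the mean squared displacement grows linearly with the number of steps, exactly as for a random walk.

```python
import numpy as np

rng = np.random.default_rng(2)

n_params, steps, eta, shots = 20, 200, 0.1, 1000
true_grad_scale = 1e-8               # exponentially concentrated landscape: tiny true gradients
noise_scale = 1.0 / np.sqrt(shots)   # shot-noise floor of each gradient estimate

theta = rng.uniform(0, 2 * np.pi, size=n_params)
start = theta.copy()
for _ in range(steps):
    # The signal is buried under shot noise, so each update is essentially random.
    grad_est = (true_grad_scale * rng.normal(size=n_params)
                + noise_scale * rng.normal(size=n_params))
    theta -= eta * grad_est

# For a random walk, the squared displacement grows linearly with the step count.
print("mean squared displacement:", np.mean((theta - start) ** 2))
print("eta^2 * noise_var * steps: ", eta**2 * noise_scale**2 * steps)
```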
August 1, 2025 at 10:13 AM
Here is a very nice schematic figure of this key result.
August 1, 2025 at 10:11 AM
With this definition in hand, Theorem 1 pins down the practical consequences of the concentration, leveraging tools from hypothesis testing.

A direct consequence of this result is that no classical post-processing can remove this indistinguishability.
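Schematically, in our own notation rather than the paper's exact statement, the argument runs as follows:

```latex
% If the outcome distributions at two typical parameter settings are
% exponentially close in total variation,
%   d_TV( p(.|theta), p(.|theta') ) <= eps = e^{-Omega(n)},
% then for M i.i.d. shots, subadditivity gives
\[
  d_{\mathrm{TV}}\!\left(p(\cdot\mid\theta)^{\otimes M},\, p(\cdot\mid\theta')^{\otimes M}\right)
  \;\le\; M\, d_{\mathrm{TV}}\!\left(p(\cdot\mid\theta),\, p(\cdot\mid\theta')\right)
  \;\le\; M\, e^{-\Omega(n)} ,
\]
% so distinguishing the two settings with constant success probability requires
\[
  M \;=\; e^{\Omega(n)} \ \text{shots},
\]
% and by the data-processing inequality, classical post-processing cannot
% increase the total variation distance, hence cannot restore distinguishability.
```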
August 1, 2025 at 10:10 AM
The scalability of such procedures depends on whether measurement outcomes carry information about the underlying parameter-dependent quantities. This motivates a shift in focus: rather than analyzing concentration at the level of the loss function, we study it at the level of POVM outcome probabilities for individual quantities.
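In shorthand (our notation, not necessarily the paper's exact definition), concentration at this level can be phrased as:

```latex
% For a POVM {E_k} and a parameterized state rho(theta), the outcome
% probabilities and their mean over random parameters are
\[
  p(k \mid \theta) \;=\; \operatorname{Tr}\!\left[E_k\,\rho(\theta)\right],
  \qquad
  \mu_k \;=\; \mathbb{E}_{\theta}\!\left[p(k \mid \theta)\right].
\]
% Exponential concentration at the outcome level: by Chebyshev's inequality,
\[
  \Pr_{\theta}\!\left[\,\bigl|p(k \mid \theta) - \mu_k\bigr| \ge \delta\,\right]
  \;\le\; \frac{\operatorname{Var}_{\theta}\!\left[p(k \mid \theta)\right]}{\delta^{2}},
  \qquad
  \operatorname{Var}_{\theta}\!\left[p(k \mid \theta)\right] \in \mathcal{O}\!\left(b^{-n}\right),\ b > 1,
\]
% i.e., over random parameter choices, each outcome probability is
% exponentially (in the number of qubits n) close to a parameter-independent value.
```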
August 1, 2025 at 10:08 AM
Many procedures used in variational quantum computing involve processing sets of parameter-dependent quantities. To formalize this, we consider a general procedure that covers a wide range of parameterized quantum models.
August 1, 2025 at 10:01 AM
Here, by analyzing concentration at the level of measurement outcome probabilities and leveraging tools from hypothesis testing, we develop a practical framework for diagnosing whether a parameterized quantum model is inhibited by exponential concentration.
August 1, 2025 at 9:57 AM
There is a growing number of proposals for circumventing exponential concentration. However, given the subtle interplay between quantum measurements and classical processing strategies, care must be taken to determine whether these approaches do in fact help in practice.
August 1, 2025 at 9:55 AM