Zoe Holmes
@qzoeholmes.bsky.social
Quantum physicist. Assistant Prof at EPFL. Climber.
Reposted by Zoe Holmes
A new article introduces a continuous-variable analogue of the Pauli propagation algorithm, called Displacement Propagation. Surprisingly, non-Gaussianity and symplectic coherence can make the system easier to simulate when noise is present.
arxiv.org/abs/2510.07264
When quantum resources backfire: Non-Gaussianity and symplectic coherence in noisy bosonic circuits
Analyzing the impact of noise is of fundamental importance to understand the advantages provided by quantum systems. While the classical simulability of noisy discrete-variable systems is increasingly...
arxiv.org
October 9, 2025 at 8:41 PM
Oh yeah. Strange. It was working (for me) earlier but not now.
Thanks for highlighting.
Let's try this: arxiv.org/abs/2510.01154
Advantage for Discrete Variational Quantum Algorithms in Circuit Recompilation
The relative power of quantum algorithms, using an adaptive access to quantum devices, versus classical post-processing methods that rely only on an initial quantum data set, remains the subject of ac...
arxiv.org
October 2, 2025 at 5:50 PM
This is the paper: scirate.com/arxiv/2510.0...
Thanks for the fun collaboration Sasha (
@sheffield-qc.bsky.social
) and Chiddy!
scirate.com
October 2, 2025 at 1:27 PM
FYI, our results here don't contradict arXiv:2312.09121, which focuses on loss estimation, proofs, and continuous protocols
Of these, the most intriguing/significant is the switch to discrete optimization
& maybe the path to adaptive quantum advantage is all about finding those discrete sweet spots 😉
October 2, 2025 at 1:23 PM
And we have found one… we provide numerical evidence that our problem lives in the Goldilocks zone:
- trainable (no exponential concentration)
- not classically surrogatable (thanks to high entanglement + magic)
October 2, 2025 at 1:23 PM
Of course, this being quantum, we face two extra demons…
- Exponential concentration (barren plateaus + shot noise)
- Classical surrogation (can classical shadows fake the landscape?)
For a real separation, we need a sweet spot that dodges both.
October 2, 2025 at 1:23 PM
More concretely, we show that for a range of moderate entangling strengths the landscape is unimodal but non-separable.
Our numerics then show that adaptive hill-climbing converges efficiently,
while non-adaptive approaches blow up exponentially.
October 2, 2025 at 1:23 PM
We translate that logic into a quantum recompilation task
The hidden string = the placement of T-gates between layers of semi-random unitaries
Goal = uncover the T-gate positions
As in LeadingOnes, identifying early T-gates helps you make progress, but you can’t optimize each gate independently
October 2, 2025 at 1:23 PM
This is the canonical “adaptivity pays” task
It's unimodal (no local minima) but non-separable (each bit cannot be trained independently)
- Adaptive strategies can flip one bit at a time, use the feedback, and find the string in O(n) queries.
- Non-adaptive strategies need exponentially many.
October 2, 2025 at 1:23 PM
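The adaptive O(n) strategy above can be sketched in Python (a toy illustration with a hypothetical oracle, not the paper's implementation): the LeadingOnes score reveals the index of the first wrong bit, so flipping exactly that bit each round recovers the hidden string in at most n + 1 queries.

```python
def leading_ones_score(candidate: str, target: str) -> int:
    # Number of leading bits matching `target` before the first mismatch.
    score = 0
    for c, t in zip(candidate, target):
        if c != t:
            break
        score += 1
    return score

def adaptive_solve(oracle, n):
    # A score of s means bits 0..s-1 are correct and bit s is wrong,
    # so flip bit s and re-query: at most n + 1 queries in total.
    guess = ["0"] * n
    queries = 0
    while True:
        s = oracle("".join(guess))
        queries += 1
        if s == n:
            return "".join(guess), queries
        guess[s] = "1" if guess[s] == "0" else "0"

hidden = "110100101"  # hypothetical hidden string for illustration
found, queries = adaptive_solve(lambda x: leading_ones_score(x, hidden), len(hidden))
print(found, queries)
```

A non-adaptive strategy, by contrast, must fix all its queries in advance and cannot exploit the per-query feedback, which is where the exponential blow-up comes from.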
Our task is a quantum twist on the classic LeadingOnes-OneMax problem.
In this problem you're trying to learn a hidden bitstring.
Your score = how many leading bits match the target before the first mismatch.
So 1110… matches 1101 better than 1011… even if they have the same Hamming weight.
October 2, 2025 at 1:23 PM
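As a minimal Python sketch of the scoring rule (not from the paper):

```python
def leading_ones_score(candidate: str, target: str) -> int:
    """Count how many leading bits of `candidate` match `target`
    before the first mismatch."""
    score = 0
    for c, t in zip(candidate, target):
        if c != t:
            break
        score += 1
    return score

# The post's example, scored against the target 1101:
print(leading_ones_score("1110", "1101"))  # 2 (mismatch at the third bit)
print(leading_ones_score("1011", "1101"))  # 1 (mismatch at the second bit)
```

Both candidates have Hamming weight 3, yet 1110 scores higher because only the unbroken leading prefix counts.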
@joachimfavre.bsky.social also has a bunch of cool projects being written up - and is currently looking for a PhD - so I recommend following him :)
August 19, 2025 at 1:58 AM
thanks! very helpful - will add.
July 24, 2025 at 1:44 PM