"When quantum resources backfire: Non-gaussianity and symplectic coherence in noisy bosonic circuits"
We introduce a path-propagation classical simulation algorithm for bosonic circuits
And find a funky interplay between quantum resources and noise
🧵👇
"When quantum resources backfire: Non-gaussianity and symplectic coherence in noisy bosonic circuits"
We introduce a path propagation classical simulation alg for bosonic circuits
And find a funky interplay between quantum resources and noise
🧵👇
- trainable (no exponential concentration)
- not classically surrogatable (thanks to high entanglement + magic)
Then numerics show adaptive hill-climbing converges efficiently
But non-adaptive approaches blow up exponentially.
The hidden string = the placement of T-gates between layers of semi-random unitaries
Goal = uncover the positions of the T-gates
As in LeadingOnes, identifying early T-gates helps you make progress, but you can’t optimize each gate independently
In this problem you're trying to learn a hidden bitstring.
Your score = how many leading bits match the target before the first mismatch.
So 1110… matches 1101 better than 1011… even if they have the same Hamming weight.
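The scoring rule, and the adaptive hill-climbing it rewards, can be sketched in a few lines. This is a toy (1+1)-style climber, not the paper's exact optimizer, and the bit-flip acceptance rule is an assumption for illustration:

```python
import random

def leading_ones(guess, target):
    """Score = number of leading positions that match before the first mismatch."""
    score = 0
    for g, t in zip(guess, target):
        if g != t:
            break
        score += 1
    return score

# 1110 scores 2 against target 1101, while 1011 only scores 1
assert leading_ones("1110", "1101") == 2
assert leading_ones("1011", "1101") == 1

# Adaptive hill climbing: flip one random bit, keep the flip if the score
# doesn't drop. The leading prefix locks in bit by bit, as in LeadingOnes.
random.seed(0)
n = 32
target = [random.randint(0, 1) for _ in range(n)]
guess, queries = [0] * n, 0
while leading_ones(guess, target) < n:
    i = random.randrange(n)
    candidate = guess[:i] + [1 - guess[i]] + guess[i + 1:]
    queries += 1
    if leading_ones(candidate, target) >= leading_ones(guess, target):
        guess = candidate
print(queries)  # converges in O(n^2) flips on average
```

Note why the problem is adaptive: flipping any bit before the current mismatch is always rejected, so you can only make progress by finding the first wrong bit given everything learned so far.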
Key takeaways:
- Entanglement isn’t always a roadblock: its degree can aid training
- Discrete optimization may be key to finding sweet spots between concentration & surrogation
In all honesty, we partially just think these are cool observations.
But we also need more quantum algorithmic primitives, and coming up with them is hard!
And we hope that maybe these thermodynamic/geometric insights could help with novel algorithm design?
And this enables us to design a new 'fixed-point' quantum search algorithm
i.e., a Grover-type algorithm that never overshoots the solution
- The imaginary time dynamics trace the shortest path between the initial and the solution states in complex projective space
- The geodesic length of ITE determines the query complexity of Grover’s algorithm
And this in turn can be approximated by a product formula...
And when you do, out pops Grover's algorithm!
And ground state problems can be solved (in the long time limit) by imaginary time evolution...
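A toy numpy illustration of that long-time limit, using a diagonal Hamiltonian chosen for simplicity (H = I - |w><w|, whose ground state is the basis state |w>): imaginary-time evolution damps every excited component, so the normalized state drifts onto the ground state as tau grows.

```python
import numpy as np

# Toy diagonal Hamiltonian H = I - |w><w|: energy 0 at index w, energy 1 elsewhere
N, w = 16, 7                          # dimension and ground-state index are arbitrary
energies = np.ones(N)
energies[w] = 0.0

psi = np.ones(N) / np.sqrt(N)         # start in the uniform superposition
tau = 5.0
psi = np.exp(-tau * energies) * psi   # e^{-tau H} acts elementwise for a diagonal H
psi /= np.linalg.norm(psi)            # imaginary-time evolution is not unitary

print(abs(psi[w]) ** 2)               # approaches 1 as tau grows
```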
Unstructured search can be written as a ground state problem.
Then Grover's is just a product formula approximation of imaginary-time evolution
or, equivalently, a Riemannian gradient flow on SU(d)
to find this ground state.
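A minimal numpy sketch of the endpoint (not the paper's derivation): take H = I - |w><w| for a hypothetical marked index w, and alternate the two reflections that the product formula produces. The familiar Grover iteration appears, and a few iterations concentrate the state on |w>.

```python
import numpy as np

N, w = 16, 7                         # search space size and marked index (arbitrary)
s = np.ones(N) / np.sqrt(N)          # uniform superposition |s>
psi = s.copy()

for _ in range(3):                   # ~ floor(pi/4 * sqrt(N)) Grover iterations
    psi[w] = -psi[w]                 # oracle: phase flip on the marked item
    psi = 2 * s * (s @ psi) - psi    # diffusion: reflection about |s>

print(abs(psi[w]) ** 2)              # success probability ~ 0.96 for N = 16
```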
For example, it's a bad idea to use Pauli prop for EXACT simulations. Even at small sizes, exact Pauli prop can kill your memory; it is better used for quick rough estimates
PauliPropagation.jl is an open-source library that you can use to approximately simulate quantum circuits
We explain the nitty-gritty of how these algorithms work in practice in our latest companion paper
🧵👇
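The library itself is written in Julia; as a language-agnostic illustration, here is a minimal Python sketch of the core idea (an assumption-laden analog, not the library's API): back-propagate a dictionary of Pauli strings through Pauli rotations, splitting each non-commuting string into a cos/sin pair, and drop strings above a weight cutoff.

```python
import numpy as np

# Single-qubit Pauli products: MULT[(a, b)] = (phase, a*b)
MULT = {
    ('I','I'):(1,'I'), ('I','X'):(1,'X'), ('I','Y'):(1,'Y'), ('I','Z'):(1,'Z'),
    ('X','I'):(1,'X'), ('X','X'):(1,'I'), ('X','Y'):(1j,'Z'), ('X','Z'):(-1j,'Y'),
    ('Y','I'):(1,'Y'), ('Y','X'):(-1j,'Z'), ('Y','Y'):(1,'I'), ('Y','Z'):(1j,'X'),
    ('Z','I'):(1,'Z'), ('Z','X'):(1j,'Y'), ('Z','Y'):(-1j,'X'), ('Z','Z'):(1,'I'),
}

def mul(a, b):
    """Multiply two Pauli strings, tracking the accumulated phase."""
    phase, out = 1, []
    for x, y in zip(a, b):
        p, r = MULT[(x, y)]
        phase, out = phase * p, out + [r]
    return phase, tuple(out)

def commutes(a, b):
    """Pauli strings commute iff they differ on an even number of non-identity sites."""
    return sum(x != 'I' and y != 'I' and x != y for x, y in zip(a, b)) % 2 == 0

def rotate(terms, gen, theta, max_weight):
    """Conjugate each string by exp(-i*theta/2 * gen); truncate heavy strings."""
    new = {}
    for q, c in terms.items():
        if commutes(q, gen):
            new[q] = new.get(q, 0) + c
        else:
            phase, qp = mul(q, gen)  # splitting rule: Q -> cos(t) Q - i sin(t) QP
            new[q] = new.get(q, 0) + np.cos(theta) * c
            if sum(p != 'I' for p in qp) <= max_weight:  # weight truncation
                new[qp] = new.get(qp, 0) + (-1j * phase) * np.sin(theta) * c
    return new

def expval_zero(terms):
    """<0...0| P |0...0> is 1 for strings of only I and Z, else 0."""
    return sum(c for q, c in terms.items() if set(q) <= {'I', 'Z'}).real

# Back-propagate Z on qubit 0 through Rx(0.3) on qubit 0: <Z_0> = cos(0.3)
terms = {('Z', 'I'): 1.0}
terms = rotate(terms, ('X', 'I'), 0.3, max_weight=2)
print(expval_zero(terms))
```

Entangling rotations (e.g. a generator like `('X', 'X')`) grow string weight, which is exactly where the `max_weight` cutoff starts discarding terms and the simulation becomes approximate.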
We see DB-QSP as most useful for deterministically implementing low-order polynomials in cases where the success probability of other methods is too small to be worth it.
Crucially, our approach doesn't need any post-selection, but this comes at the expense of increased circuit depths.
These could be used to initialize your favourite quantum algorithm for ground state energy estimates.
Intuitively, high-length monomials are unlikely to contribute to expectation values (and tend not to flow back in unstructured circuits) and so can be dropped.
Like Pauli propagation, the algorithm works in the Heisenberg picture, but this time we back-propagate Majorana monomials and then overlap them with an initial state.
Depending on your mood... the algorithm can be viewed either as naturally suited to compete with, or collaboratively enhance, quantum hardware simulations.
The proofs get long but the intuition is borderline trivial: as quantum losses are smooth, the region around a point with some curvature must have non-vanishing gradients
We provide a general variance lower bound for patches of loss landscapes:
- for both structured and unstructured circuits
- to provide small-angle-initialization 'guarantees'
- to study the scaling of regions of attraction
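A toy numpy experiment shows both regimes on a product-of-cosines loss (an assumed stand-in, not the paper's circuits or its bound): uniform initialization concentrates gradients exponentially in the number of parameters, while small-angle initialization near the curved point at theta = 0 keeps the variance visible.

```python
import numpy as np

rng = np.random.default_rng(0)
n, samples = 24, 20000

def grad_variance(width):
    """Var of dL/dtheta_0 for L(theta) = prod_i cos(theta_i), theta_i ~ U(-width, width)."""
    thetas = rng.uniform(-width, width, (samples, n))
    grads = -np.sin(thetas[:, 0]) * np.prod(np.cos(thetas[:, 1:]), axis=1)
    return grads.var()

wide = grad_variance(np.pi)    # ~ 2**-n: exponential concentration (barren plateau)
narrow = grad_variance(0.1)    # ~ Var[theta_0]: non-vanishing, hence trainable
print(wide, narrow)
```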
Check out the early appendices if you're that way inclined.