I like 3D vision and training neural networks.
Code: https://github.com/parskatt
Weights: https://github.com/Parskatt/storage/releases/tag/roma
1. COLMAP would work pretty well (given good correspondences; see the rough sketch after this list), but the baseline is rather small, and the dynamics of the cloud would be tricky.
2. Feed-forward methods would underestimate the size of the cloud.
Interested to hear whether this is indeed the case, or whether my intuitions are wrong.
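Roughly what I mean by option 1, as a minimal pycolmap sketch (the paths are placeholders and the default SIFT features + exhaustive matching are assumptions; getting "good correspondences" would really mean writing a stronger matcher's output into the database instead of using the built-in matcher):

```python
# Minimal SfM sketch with pycolmap; paths are placeholders, not a real setup.
from pathlib import Path
import pycolmap

image_dir = Path("images")   # hypothetical folder with the cloud photos
out_dir = Path("sfm_out")
out_dir.mkdir(exist_ok=True)
database_path = out_dir / "database.db"

# Default SIFT features + exhaustive matching; with a small baseline and a
# moving cloud, these built-in correspondences are exactly the weak point.
pycolmap.extract_features(database_path, image_dir)
pycolmap.match_exhaustive(database_path)

# Incremental mapping returns a dict of candidate reconstructions.
maps = pycolmap.incremental_mapping(database_path, image_dir, out_dir)
for idx, rec in maps.items():
    print(idx, rec.summary())
```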
As for revising after the OpenReview debacle: we were not supposed to do so.
I would bet that we would beat their IMC22 numbers in that setting.
Models are distilled versions of data.
I don't think you would see DL without it, but you can of course argue otherwise.
Pixel correspondence makes more sense than feature matching (what is a feature?).
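What I mean by pixel correspondence, as a small sketch: if a dense matcher predicts a per-pixel warp plus a certainty map (the shapes, coordinate convention, and threshold below are assumptions, not any particular model's API), then "matches" are just sampled pixel pairs, and no detector has to decide what counts as a feature:

```python
# Sketch: dense pixel correspondence as a per-pixel warp, no explicit "features".
# `warp` and `certainty` are assumed outputs of some dense matcher; the shapes
# and the normalized-coordinate convention are assumptions for illustration.
import torch

def warp_to_matches(warp: torch.Tensor, certainty: torch.Tensor,
                    H_A: int, W_A: int, H_B: int, W_B: int,
                    thresh: float = 0.5, num: int = 5000):
    """warp: (H, W, 2) normalized coords in image B for each pixel of image A,
    certainty: (H, W) matchability in [0, 1]. Returns pixel coords in A and B."""
    H, W, _ = warp.shape
    # Pixel grid of image A in normalized [-1, 1] coordinates.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W),
                            indexing="ij")
    grid_A = torch.stack((xs, ys), dim=-1)               # (H, W, 2)
    mask = certainty > thresh
    matches_A, matches_B = grid_A[mask], warp[mask]      # (N, 2) each
    # Keep at most `num` correspondences, sampled by certainty.
    if matches_A.shape[0] > num:
        idx = torch.multinomial(certainty[mask], num, replacement=False)
        matches_A, matches_B = matches_A[idx], matches_B[idx]
    # Back to pixel coordinates of the respective images.
    kpts_A = (matches_A + 1) / 2 * torch.tensor([W_A - 1, H_A - 1])
    kpts_B = (matches_B + 1) / 2 * torch.tensor([W_B - 1, H_B - 1])
    return kpts_A, kpts_B
```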