My student Zimo/Cheng's recent work tackles this problem! A lot of recent neural "SDF" optimizers use losses that, even when perfectly minimized, still don't result in actual signed distance fields. Our loss guarantees convergence to distance when minimized.
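The post doesn't name the losses in question, but a common example is the eikonal-type loss used in much neural SDF work (penalize ‖∇f‖ ≠ 1 plus f ≠ 0 on surface samples). A minimal 1-D sketch, assuming that loss: a triangle wave drives both terms to zero yet is nowhere near the true distance. The function here is my illustrative counterexample, not anything from the paper.

```python
import numpy as np

# Hypothetical 1-D setup: the "surface" is the single point x = 0,
# so the true signed distance is simply f*(x) = x.
# The triangle wave below is NOT that distance, yet it drives the
# usual eikonal-style loss (|f'| = 1, plus f = 0 on the surface) to zero.

def f(x):
    # Triangle wave with slope +/-1 everywhere and f(0) = 0.
    return np.arcsin(np.sin(np.pi * x)) / np.pi

def grad_f(x):
    # Analytic derivative: exactly +/-1 almost everywhere (kinks aside).
    return np.sign(np.cos(np.pi * x))

rng = np.random.default_rng(0)
x = rng.uniform(-5.0, 5.0, 10_000)  # random domain samples (kinks have measure zero)

eikonal_loss = np.mean((np.abs(grad_f(x)) - 1.0) ** 2)  # exactly 0.0
surface_loss = abs(f(0.0))                              # exactly 0.0

print(eikonal_loss, surface_loss)  # both 0: a "perfect" minimizer
print(f(2.5))                      # ~0.5, but the true distance is 2.5
```

So zero loss does not certify a distance field, which is the failure mode the post describes; a loss whose global minimizers are provably distance functions rules such impostors out.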