Guy Moss
gmoss13.bsky.social
PhD student at @mackelab.bsky.social - machine learning & geoscience.
Interested in learning more? Come visit our poster at #NeurIPS2024, or simply get in touch! Huge thanks again to @vetterj.bsky.social, Cornelius Schröder, @rdgao.bsky.social, and @jakhmack.bsky.social
(8/8)
December 10, 2024 at 2:33 AM
We apply Sourcerer to a real dataset of single-neuron recordings and the Hodgkin-Huxley model. This model is misspecified and highly nonlinear. Still, Sourcerer estimates source distributions that accurately reproduce the dataset, again achieving higher entropy “for free”!
(7/8)
December 10, 2024 at 2:32 AM
With the likelihood-free loss, Sourcerer can make use of differentiable simulators to quickly estimate source distributions, even when the data is high-dimensional, such as for time series data.
(6/8)
December 10, 2024 at 2:32 AM
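A toy illustration of the idea in the post above (my own sketch, not the paper's code): when the simulator is differentiable, gradients of a sample-based mismatch flow straight back to the source parameters, even when each simulation is a whole time series. The linear "simulator", the decay kernel, and the Gaussian source with learnable mean `mu` are all invented for this example.

```python
import numpy as np

# Hypothetical differentiable "time-series" simulator: g(theta) = theta * decay^t.
T = 50
decay = 0.95 ** np.arange(T)            # fixed response kernel, shape (T,)

def simulate(theta):
    """Map n parameter draws to n time series of length T."""
    return np.outer(theta, decay)       # shape (n, T)

rng = np.random.default_rng(0)
# "Observed" dataset: series generated from a source with true mean 2.0.
data = simulate(rng.normal(loc=2.0, scale=0.5, size=256))

# Source model: Gaussian over theta with a learnable mean mu (scale fixed for brevity).
mu, lr = 0.0, 0.5
for _ in range(200):
    eps = rng.normal(size=256)
    theta = mu + 0.5 * eps              # reparameterized samples from the source
    resid = simulate(theta).mean(axis=0) - data.mean(axis=0)  # match mean trajectory
    # Chain rule through the simulator: d(sim mean)/d(mu) = decay,
    # so d(mean squared residual)/d(mu) = 2 * mean_t(resid_t * decay_t).
    mu -= lr * 2.0 * np.mean(resid * decay)

# mu should end up close to the true source mean of 2.0
```

Here the gradient is written out by hand for a linear toy; in practice an autodiff framework would supply it for any differentiable simulator, which is what makes the per-step cost just one batch of simulations.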
Sourcerer consistently finds source distributions that reproduce the dataset with high fidelity on a collection of benchmark tasks. When we also regularize for high entropy, Sourcerer finds higher entropy source distributions at no cost to the simulation fidelity!
(5/8)
December 10, 2024 at 2:31 AM
We define a sample-based loss with an entropy-regularization term. Because the loss only needs samples, we place no constraints on our variational distribution and can optimize directly from simulations - we never need to know or estimate the model likelihood.
(4/8)
December 10, 2024 at 2:31 AM
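A minimal sketch of what a sample-based, entropy-regularized loss can look like (my own construction, not the paper's implementation): a sliced 1-Wasserstein distance between simulated and observed samples as the fidelity term, and a k-nearest-neighbour (Kozachenko-Leonenko style) entropy estimate as the regularizer. Both terms need only samples, never a likelihood. The function names and the weight `lam` are illustrative.

```python
import numpy as np

def sliced_w1(x, y, n_proj=64, seed=0):
    """Monte-Carlo sliced 1-Wasserstein distance between equal-size sample sets."""
    rng = np.random.default_rng(seed)
    dists = []
    for _ in range(n_proj):
        theta = rng.normal(size=x.shape[1])
        theta /= np.linalg.norm(theta)          # random unit direction
        dists.append(np.mean(np.abs(np.sort(x @ theta) - np.sort(y @ theta))))
    return float(np.mean(dists))

def knn_entropy(z, k=3):
    """k-NN entropy estimate (Kozachenko-Leonenko style, up to an additive constant)."""
    n, d = z.shape
    dist = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    r_k = np.sort(dist, axis=1)[:, k - 1]       # distance to k-th nearest neighbour
    return float(d * np.mean(np.log(r_k + 1e-12)) + np.log(n - 1))

def source_loss(theta_samples, simulator, data, lam=0.25):
    """Entropy-regularized, sample-based loss: fidelity minus weighted entropy."""
    sims = simulator(theta_samples)             # only simulations, no likelihoods
    return sliced_w1(sims, data) - lam * knn_entropy(theta_samples)
```

Minimizing `source_loss` over the parameters of any sampler for `theta` trades off matching the data against keeping the source distribution broad; nothing here restricts the form of the variational distribution.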
The problem? The source distribution is not unique! So, which source distribution should we target? We propose to target the maximum entropy source distribution. This guarantees uniqueness, and also ensures that we do not miss any feasible model parameters!
(3/8)
December 10, 2024 at 2:30 AM
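In my own notation (symbols not from the thread), the maximum-entropy target described above can be written as a constrained optimization over candidate source distributions $q$:

```latex
q^{\star} = \arg\max_{q} \; \mathcal{H}(q)
\quad \text{s.t.} \quad
\int p(x \mid \theta)\, q(\theta)\, d\theta = p_{\mathrm{obs}}(x),
```

where $\theta$ are the simulator parameters, $p(x \mid \theta)$ is the (possibly intractable) simulator, $p_{\mathrm{obs}}$ is the observed data distribution, and $\mathcal{H}$ is differential entropy. Among all source distributions whose pushforward reproduces the data, the entropy maximizer is unique and, being maximally spread out, avoids discarding feasible parameter values.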