Blas Kolic
@blas-ko.bsky.social
Complex systems, Networks, Computational Social Science, Machine Learning
Postdoc at uc3m-IBiDat, Madrid
https://blas-ko.github.io/
While limited to one ABM, this work fills a critical gap in the ABM calibration literature, providing the first structured comparison of DA and LBI for latent state inference.
Kudos to Marco, Corrado, and Gianmarco for such a wonderful collaboration!
Hope you enjoy it
👉 arxiv.org/abs/2509.17625
Comparing Data Assimilation and Likelihood-Based Inference on Latent State Estimation in Agent-Based Models
In this paper, we present the first systematic comparison of Data Assimilation (DA) and Likelihood-Based Inference (LBI) in the context of Agent-Based Models (ABMs). These models generate observable t...
arxiv.org
October 6, 2025 at 7:53 AM
⚖️ Essentially:
➖ DA: Great for macro-level patterns. Easy to apply, doesn’t need a formal likelihood.
➖ LBI: Superior for micro-level accuracy, but needs explicit likelihoods (often hard to derive).
➡️ Trade-off between generality and precision.
October 6, 2025 at 7:53 AM
📊 Main results:
➖ At the agent level, LBI outperforms DA in reconstructing latent opinions: it is both more accurate and more robust to model errors.
➖ At the aggregate level, both methods perform similarly well → DA remains competitive for forecasting population-level trends (the two evaluation levels are sketched below).
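To make the two evaluation levels concrete, here is a minimal sketch (not the paper's exact scoring): agent-level error compares each reconstructed opinion to its true value, while aggregate-level error compares only population summaries. Function names and array shapes are illustrative assumptions.

```python
# Illustrative metrics for the two evaluation levels.
# x_true, x_est: arrays of shape (n_timesteps, n_agents).
import numpy as np

def agent_level_rmse(x_true, x_est):
    # Per-agent reconstruction error, pooled over agents and time steps.
    return np.sqrt(np.mean((x_true - x_est) ** 2))

def aggregate_level_rmse(x_true, x_est):
    # Error on the population-mean opinion trajectory only.
    return np.sqrt(np.mean((x_true.mean(axis=1) - x_est.mean(axis=1)) ** 2))
```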
October 6, 2025 at 7:53 AM
We test this using the Bounded-Confidence Model of opinion dynamics, where agents interact only if their opinions are sufficiently close, resulting in nonlinear updates.
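For intuition, here is a minimal sketch of a Deffuant-style bounded-confidence update; the pairwise-interaction scheme and the parameter names eps (confidence bound) and mu (convergence rate) are assumptions for illustration, not necessarily the paper's exact specification.

```python
# One pairwise step of a Deffuant-style bounded-confidence model:
# a random pair interacts only if their opinions are within eps,
# then each moves toward the other by rate mu (the nonlinear gate).
import numpy as np

def bc_step(x, eps=0.2, mu=0.5, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    i, j = rng.choice(len(x), size=2, replace=False)
    if abs(x[i] - x[j]) < eps:  # bounded-confidence interaction gate
        x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])
    return x

# Example: 100 agents with uniform initial opinions on [0, 1].
x = np.random.default_rng(1).uniform(0, 1, size=100)
for _ in range(10_000):
    x = bc_step(x)
```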
⚙️ Scenarios:
➖ Observed: agent interactions
➖ Latent: agent opinions
➖ Noisy opinions
➖ Mis-specified model parameters
October 6, 2025 at 7:53 AM
Can we recover the latent agent states (e.g., opinions) from observed data in an ABM?
🆎 First systematic comparison between:
➖ Data Assimilation (DA) → Approximate, model-agnostic (minimal sketch below)
➖ Likelihood-Based Inference (LBI) → Precise, but model-specific
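For intuition, here is a minimal sketch of one common DA approach, a bootstrap particle filter for tracking latent agent states; `step`, `loglik_obs`, and `init` are hypothetical stand-ins for the ABM's transition, observation, and initialization routines, not the paper's implementation.

```python
# Minimal bootstrap particle filter for latent-state tracking in an ABM.
import numpy as np

def particle_filter(y_obs, step, loglik_obs, init, n_particles, rng=None):
    """Estimate latent agent states given a sequence of observations."""
    if rng is None:
        rng = np.random.default_rng(0)
    particles = init(n_particles, rng)  # shape (n_particles, n_agents)
    estimates = []
    for y in y_obs:
        particles = step(particles, rng)   # propagate through the ABM
        logw = loglik_obs(y, particles)    # weight by fit to observation
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)  # resample
        particles = particles[idx]
        estimates.append(particles.mean(axis=0))  # posterior-mean estimate
    return np.array(estimates)
```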
October 6, 2025 at 7:53 AM
Amazing collab with Fabián Aguirre-Lopez and the data science crew at Sinnia, Mexico.
📄 Journal: doi.org/10.1093/comn...
📝 ArXiv (OA): arxiv.org/abs/2206.14501
💻 Code & plots: github.com/blas-ko/Twit...
September 22, 2025 at 7:25 PM
very nasty, indeed
September 11, 2025 at 11:27 AM