DurstewitzLab
durstewitzlab.bsky.social
Scientific AI / machine learning, dynamical systems (reconstruction), generative surrogate models of brains & behavior, applications in neuroscience & mental health
Despite being extremely lightweight (only 0.1% of the parameters and 0.6% of the training-corpus size of its closest competitor), it also outperforms major TS foundation models like the Chronos variants on real-world TS forecasting, with minimal inference times (0.2%) ...
September 21, 2025 at 9:40 AM
Yes I think so!
July 4, 2025 at 5:50 AM
Fantastic work by Florian Bähner, Hazem Toutounji, Tzvetan Popov and many others - I'm just the person advertising!
June 26, 2025 at 3:30 PM
Reposted by DurstewitzLab
What a line-up!! With Lorenzo Gaetano Amato, Demian Battaglia, @durstewitzlab.bsky.social, @engeltatiana.bsky.social, @seanfw.bsky.social, Matthieu Gilson, Maurizio Mattia, @leonardopollina.bsky.social, Sara Solla.
June 21, 2025 at 10:24 AM
We dive a bit into the reasons why current time series FMs, not trained for DS reconstruction, fail, and conclude that a DS perspective on time series forecasting & models may help to advance the #TimeSeriesAnalysis field.

(6/6)
May 20, 2025 at 2:15 PM
Remarkably, it not only generalizes zero-shot to novel DS, but it can even generalize to new initial conditions and regions of state space not covered by the in-context information.

(5/6)
May 20, 2025 at 2:15 PM
And no, it’s neither based on Transformers nor Mamba – it’s a new type of mixture-of-experts architecture based on the recently introduced AL-RNN (proceedings.neurips.cc/paper_files/...), specifically trained for DS reconstruction.
#AI

(4/6)
May 20, 2025 at 2:15 PM
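To make the AL-RNN idea above concrete: the core trick (per the linked NeurIPS paper) is a latent RNN that is linear in most units, with a ReLU applied to only a small subset. The sketch below is only an illustration of that update rule with made-up parameters, not DynaMix's actual mixture-of-experts model or trained weights:

```python
import numpy as np

rng = np.random.default_rng(0)

M, P = 8, 3  # M latent units in total, of which only P pass through a ReLU

# Hypothetical parameters of a single AL-RNN "expert" (illustration only);
# scales chosen small so the rollout stays bounded.
A = 0.5 * np.eye(M)                      # linear (diagonal) part
W = 0.05 * rng.standard_normal((M, M))   # connectivity for the nonlinear pass
h = rng.standard_normal(M)               # bias

def al_rnn_step(z):
    """One latent update: linear in all units, ReLU on only the last P."""
    phi = z.copy()
    phi[-P:] = np.maximum(phi[-P:], 0.0)  # partial (almost-linear) nonlinearity
    return A @ z + W @ phi + h

# Roll out a trajectory from a random initial condition
z = rng.standard_normal(M)
traj = [z]
for _ in range(100):
    z = al_rnn_step(z)
    traj.append(z)
traj = np.asarray(traj)
```

Because only P of the M units are nonlinear, the state space decomposes into a small number of linear regions, which is what makes the reconstructed dynamics comparatively interpretable.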
It often even outperforms TS FMs on forecasting diverse empirical time series, like weather, traffic, or medical data, typically used to train TS FMs.

This is surprising, cos DynaMix's training corpus consists *solely* of simulated limit cycles & chaotic systems, no empirical data at all!

(3/6)
May 20, 2025 at 2:15 PM
Unlike TS FMs, DynaMix exhibits #ZeroShotLearning of long-term stats of unseen DS, incl. attractor geometry & power spectrum, w/o *any* re-training, just from a context signal.

It does so with only 0.1% of the parameters of Chronos & 10x faster inference times than the closest competitor.

(2/6)
May 20, 2025 at 2:15 PM
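Why evaluate long-term statistics like the power spectrum rather than pointwise forecast error? For chaotic systems, nearby trajectories diverge exponentially, so pointwise MSE becomes meaningless beyond a short horizon, while invariant statistics remain comparable. A minimal sketch of one such metric (a Hellinger-type distance between normalized power spectra; the exact metrics used in the paper may differ):

```python
import numpy as np

def power_spectrum_error(x_true, x_gen):
    """Hellinger distance between normalized power spectra of two 1D signals.
    A long-term statistic: invariant to trajectory divergence, unlike MSE."""
    def norm_spec(x):
        s = np.abs(np.fft.rfft(x - x.mean())) ** 2
        return s / s.sum()
    p, q = norm_spec(x_true), norm_spec(x_gen)
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Two noisy traces of the same oscillation share a spectrum,
# even though they disagree pointwise; a different frequency does not.
t = np.linspace(0, 20 * np.pi, 2000)
rng = np.random.default_rng(1)
a = np.sin(t) + 0.05 * rng.standard_normal(t.size)
b = np.sin(t) + 0.05 * rng.standard_normal(t.size)
c = np.sin(3 * t) + 0.05 * rng.standard_normal(t.size)

err_same = power_spectrum_error(a, b)   # small: matching dynamics
err_diff = power_spectrum_error(a, c)   # large: wrong frequency
```

Attractor-geometry measures (e.g. state-space distribution overlap) play an analogous role for capturing the "shape" of the reconstructed dynamics.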