Francisco Mena
@fmenat.bsky.social
Researcher @gfz.bsky.social & PhD candidate @rptu.bsky.social 🇩🇪
> MSc @UTFSM 🇨🇱 | ex. Researcher @dfki.bsky.social 🇩🇪 & visitor @Inria 🇫🇷

Enjoying research in AI & ML 🤖 | Now, into #AI4EO 🛰️
Instead of trying all possible combinations, the search can be reduced to a 2-step sequential search: 1) search for the best encoder architecture with early/input fusion, and then 2) with the encoder selected in (1), search for the best fusion strategy.
September 11, 2025 at 2:04 PM
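A minimal sketch of this 2-step sequential search. The encoder/fusion names and the `evaluate` scores are purely illustrative stand-ins for a real train-and-validate pipeline:

```python
# Illustrative sketch of the 2-step sequential search described above.
# evaluate() is a stand-in for training + validation of one configuration.
def evaluate(encoder, fusion):
    scores = {("cnn", "input"): 0.81, ("cnn", "feature"): 0.84,
              ("attention", "input"): 0.83, ("attention", "feature"): 0.86}
    return scores[(encoder, fusion)]

encoders = ["cnn", "attention"]
fusions = ["input", "feature"]

# Step 1: pick the best encoder with input fusion fixed.
best_encoder = max(encoders, key=lambda e: evaluate(e, "input"))
# Step 2: with that encoder fixed, pick the best fusion strategy.
best_fusion = max(fusions, key=lambda f: evaluate(best_encoder, f))
```

This evaluates |encoders| + |fusions| configurations instead of |encoders| × |fusions|, which is where the savings come from.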
When considering all the diverse encoder architectures (like convolutional or attention-based) and fusion strategies (like input and feature fusion) from the literature, the search space of all possible model combinations is considerably large, making an exhaustive search a resource-wasting process.
September 11, 2025 at 2:04 PM
We show that our multi-sensor approach is more robust on average than recent methods from the EO literature in three classification tasks, namely cropland, crop-type, and tree-species classification.

@interdonatos.bsky.social
May 13, 2025 at 11:37 AM
Concretely, we use a mix of sensor dropout as data augmentation and mutual distillation to enhance collaborative learning across sensors, which we name DSensD+. We leverage multi-task learning to combine these objectives and achieve optimal robustness.
May 13, 2025 at 11:37 AM
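A rough sketch of the two ingredients, sensor dropout and mutual distillation, in PyTorch. This is not the paper's DSensD+ implementation; function names, the shared-teacher choice (fused prediction), and the temperature are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def sensor_dropout(features, p=0.3):
    """Data augmentation: randomly zero out whole sensors (keep at least one)."""
    mask = torch.rand(len(features)) > p
    if not mask.any():
        mask[0] = True  # never drop every sensor
    return [f if keep else torch.zeros_like(f) for f, keep in zip(features, mask)]

def mutual_distillation_loss(logits_per_sensor, fused_logits, T=2.0):
    """KL between each sensor's softened prediction and the fused prediction,
    encouraging the per-sensor branches to learn from each other."""
    teacher = F.log_softmax(fused_logits / T, dim=-1)
    loss = 0.0
    for logits in logits_per_sensor:
        student = F.log_softmax(logits / T, dim=-1)
        loss = loss + F.kl_div(student, teacher, log_target=True,
                               reduction="batchmean")
    return loss / len(logits_per_sensor)
```

In a multi-task setup, this distillation term would be added to the usual per-sensor and fused classification losses with some weighting.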
Did you know that mutual distillation can be used to make deep learning models robust to missing sensor data?
We present this in our recent paper from a collaboration between @dfki.bsky.social and Inria (evergreen team). Available at @ieeeaccess.bsky.social 🔓

ieeexplore.ieee.org/document/10994…
May 13, 2025 at 11:37 AM
We show that simulating all possible Combinations of Missing (CoM) views during training allows the models to be aware of potential missing data during inference.

This translates into generalization to missing-view scenarios (increased robustness) and, in some cases, improved performance even with all views available.
April 11, 2025 at 7:21 AM
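A small sketch of how the availability patterns could be enumerated, so training can cycle through every Combination of Missing (CoM) views. The view names are illustrative:

```python
from itertools import combinations

def com_patterns(views):
    """All non-empty subsets of views, i.e. every possible availability case
    the model could face at inference time."""
    patterns = []
    for r in range(1, len(views) + 1):
        patterns.extend(combinations(views, r))
    return patterns

# For n views there are 2**n - 1 such patterns,
# e.g. optical + radar + weather gives 7 cases.
```

During training, each batch could be paired with one of these patterns, masking out the "missing" views before fusion.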
How will your multi-view model perform if data is missing during deployment?
Usually, this translates into a considerable decline in accuracy.

However, simple choices in model design can make it robust to missing data. We address this in our recent paper in the Neurocomputing journal.
April 11, 2025 at 7:21 AM
By analyzing the data-driven fusion weights, we found interesting crop-country dependencies. Take a look if you are interested 😁
December 16, 2024 at 7:06 AM
More than a year ago, we started with the idea of using an adaptive multi-modal fusion model 🤖 for crop yield prediction 🌽🌿. This is because each source of information might provide better crop-related information than others depending on the case (region, time, data quality) 🤓
December 16, 2024 at 7:06 AM
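A gated-fusion sketch of the idea, showing how per-sample fusion weights can be produced and inspected. This is an illustrative architecture, not the paper's exact model; the shared linear scorer is an assumption:

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Score each source's features, softmax the scores into fusion weights,
    and return the weights so they can be analyzed per sample/region."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # shared scorer across sources

    def forward(self, feats):  # feats: (batch, n_sources, dim)
        weights = torch.softmax(self.score(feats).squeeze(-1), dim=-1)
        fused = (weights.unsqueeze(-1) * feats).sum(dim=1)
        return fused, weights  # weights reveal which source dominates
```

Averaging the returned weights per country/crop is one way such crop-country dependencies could be read off the model.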