Francisco Mena
@fmenat.bsky.social
Researcher @gfz.bsky.social & PhD candidate @rptu.bsky.social 🇩🇪
> MSc @UTFSM 🇨🇱 | ex. Researcher @dfki.bsky.social 🇩🇪 & visitor @Inria 🇫🇷
Enjoying research in AI & ML 🤖 | Now, into #AI4EO 🛰️
Given the substantial computational resources consumed by deep learning research, and the resulting carbon footprint 👣, it is important to look for systematic ways to reduce computational effort
September 11, 2025 at 2:04 PM
Instead of trying all possible combinations, the search could be reduced to a 2-step sequential search: 1) search for the best encoder architecture with early/input fusion, and then 2) with the encoder selected in (1), search for the best fusion strategy
September 11, 2025 at 2:04 PM
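A minimal sketch of this 2-step sequential search (illustrative only: the candidate lists and the `train_and_evaluate` helper are hypothetical placeholders, not the paper's setup):

```python
# Illustrative 2-step sequential search over encoders and fusion strategies.
ENCODERS = ["cnn", "rnn", "transformer"]    # candidate encoder architectures
FUSIONS = ["input", "feature", "decision"]  # candidate fusion strategies

def two_step_search(train_and_evaluate):
    # Step 1: fix the fusion to early/input fusion and search encoders only.
    best_encoder = max(
        ENCODERS,
        key=lambda enc: train_and_evaluate(encoder=enc, fusion="input"))
    # Step 2: keep the selected encoder and search fusion strategies only.
    best_fusion = max(
        FUSIONS,
        key=lambda fus: train_and_evaluate(encoder=best_encoder, fusion=fus))
    return best_encoder, best_fusion
```

This trains at most len(ENCODERS) + len(FUSIONS) models instead of the full len(ENCODERS) × len(FUSIONS) grid.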
When considering all the diverse encoder architectures (like convolutional or attention-based) and fusion strategies (like input and feature) from the literature, the search space of all possible model combinations is considerably large, and exploring it exhaustively wastes resources.
September 11, 2025 at 2:04 PM
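To make the scale concrete, a back-of-the-envelope comparison (the candidate counts are hypothetical):

```python
# Hypothetical candidate counts, only to illustrate the scale.
n_encoders = 5   # e.g., CNN, RNN, and attention-based variants
n_fusions = 4    # e.g., input, feature, decision, hybrid

exhaustive = n_encoders * n_fusions   # 20 full training runs
sequential = n_encoders + n_fusions   # 9 runs with the 2-step search above
```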
Reposted by Francisco Mena
Also, don't hesitate to visit our CCS in Probabilistic Machine Learning for Earth Observation (TU2.M1)!
⏲️ Tuesday, 5 August, 10:30 - 11:45
August 1, 2025 at 2:31 PM
⏲️ Thursday, 7 August, 15:45 - 17:00
📜 On What Depends the Robustness of Multi-source Models to Missing Data in Earth Observation? in the TH4.P11: Multi-source Semantic Segmentation (oral 🎤)
⭐I'll present our findings about three major factors that drive the robustness to missing data sources.
August 1, 2025 at 2:31 PM
⏲️ Tuesday, 5 August, 09:15 - 10:30
📜 A Multi-modal Co-learning Model with Shared and Specific Features for Land-cover Classification in the TUP1.PB: Cross-Domain Learning and Semantic Segmentation in RS (poster🖼️)
⭐ Here we leverage co-learning and multiple losses to improve single-modality inference
August 1, 2025 at 2:31 PM
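For intuition about the co-learning setup above, a minimal PyTorch-style sketch of shared and specific features with multiple losses (an illustrative rendering, not the paper's architecture: layer sizes, the fusion rule, and the equal loss weights are all assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoLearningSketch(nn.Module):
    """Toy co-learning model with shared + modality-specific features."""

    def __init__(self, dim_a, dim_b, hidden=64, n_classes=8):
        super().__init__()
        # Modality-specific encoders (one per input modality).
        self.enc_a = nn.Sequential(nn.Linear(dim_a, hidden), nn.ReLU())
        self.enc_b = nn.Sequential(nn.Linear(dim_b, hidden), nn.ReLU())
        self.shared = nn.Linear(hidden, hidden)  # common space for both modalities
        self.spec_a = nn.Linear(hidden, hidden)  # modality-only information
        self.spec_b = nn.Linear(hidden, hidden)
        self.head_a = nn.Linear(2 * hidden, n_classes)   # single-modality heads
        self.head_b = nn.Linear(2 * hidden, n_classes)
        self.head_ab = nn.Linear(2 * hidden, n_classes)  # fused head

    def forward(self, x_a, x_b):
        h_a, h_b = self.enc_a(x_a), self.enc_b(x_b)
        s_a, s_b = self.shared(h_a), self.shared(h_b)  # shared features
        p_a, p_b = self.spec_a(h_a), self.spec_b(h_b)  # specific features
        fused = torch.cat([(s_a + s_b) / 2, (p_a + p_b) / 2], dim=-1)
        return (self.head_a(torch.cat([s_a, p_a], dim=-1)),
                self.head_b(torch.cat([s_b, p_b], dim=-1)),
                self.head_ab(fused))

def co_learning_loss(logits_a, logits_b, logits_ab, y):
    # Multiple losses: each modality head is supervised on its own, so the
    # model remains usable when only one modality is present at inference.
    return (F.cross_entropy(logits_ab, y)
            + F.cross_entropy(logits_a, y)
            + F.cross_entropy(logits_b, y))
```

The per-modality heads are what enable single-modality inference: each modality is trained to predict on its own, so the model does not collapse when only one input is available.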
The code is available at github.com/fmenat/DSensDp
May 13, 2025 at 1:03 PM
We show that our multi-sensor approach is more robust on average than recent methods from the EO literature in three classification tasks, namely cropland classification, crop-type classification, and tree-species classification.
@interdonatos.bsky.social
May 13, 2025 at 11:37 AM
Concretely, our approach, DSensD+, combines sensor dropout as data augmentation with mutual distillation to enhance collaborative learning across sensors. We leverage multi-task learning to combine these objectives and achieve optimal robustness
May 13, 2025 at 11:37 AM
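For the gist before opening the repo, a minimal PyTorch-style sketch of the two ingredients (my own illustrative rendering, not the DSensDp code: the function names, drop probability, temperature, and the averaged-teacher KL form are assumptions):

```python
import random
import torch
import torch.nn.functional as F

def sensor_dropout(features, p_drop=0.3):
    """Data augmentation: randomly zero out whole sensors during training.
    `features` maps sensor name -> feature tensor; at least one sensor is
    always kept. (Illustrative; the actual dropout policy may differ.)"""
    kept = [s for s in features if random.random() > p_drop]
    if not kept:  # never drop every sensor
        kept = [random.choice(list(features))]
    return {s: (f if s in kept else torch.zeros_like(f))
            for s, f in features.items()}

def mutual_distillation(logits_per_sensor, tau=2.0):
    """Each per-sensor branch distills from the average of the other
    branches' softened predictions, encouraging collaborative learning.
    Assumes at least two sensors."""
    names = list(logits_per_sensor)
    loss = 0.0
    for s in names:
        teacher = torch.stack(
            [logits_per_sensor[o] for o in names if o != s]).mean(dim=0)
        p_teacher = F.softmax(teacher.detach() / tau, dim=-1)
        log_p_student = F.log_softmax(logits_per_sensor[s] / tau, dim=-1)
        loss = loss + tau ** 2 * F.kl_div(
            log_p_student, p_teacher, reduction="batchmean")
    return loss / len(names)
```

In a multi-task training loop, the total objective would combine the task loss on the fused prediction with this distillation term, e.g. `loss = task_loss + lam * mutual_distillation(logits_per_sensor)`, with `lam` a hypothetical weighting hyperparameter.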