Asst. Prof. at University of Copenhagen & Pioneer Centre for AI
formerly: PhD at ETH Zürich
#CV #ML #EO #AI4EO #SSL4EO #ML4good
🔗 langnico.github.io
Join us for the AI for Earth & Climate Sciences Workshop, part of the ELLIS UnConference (in Copenhagen 🇩🇰 on Dec 2), co-located with #EurIPS.
🕒 Submit workshop contributions by Oct 24, 2025
🔗 All info: eurips.cc/ellis
Come join the discussion at the EurIPS workshop "REO: Advances in Representation Learning for Earth Observation"
Call for papers deadline: October 15, AoE
Workshop site: sites.google.com/view/reoeurips
@euripsconf.bsky.social @esa.int
Workshop site: sites.google.com/g.harvard.ed...
Are you working at the intersection of AI and Climate or Conservation applications?
This is a great opportunity to discuss your novel and recently published research.
Call for participation: sites.google.com/g.harvard.ed...
This MICCAI 2025 challenge is still open, and there is time to participate!
Submission deadline: August 20, 2025
Join here: fomo25.github.io
Check out the thread below👇
We’re excited to welcome Prof. Dr. Devis Tuia from EPFL, who will join us on October 1st with his talk:
“Machine Learning for Earth: Monitoring the Pulse of Our Planet with Sensor Data, from Your Phone All the Way to Space”
We will be in room 104E.
#FGVC #CVPR2025
@fgvcworkshop.bsky.social
We are looking forward to a series of talks on semantic granularity, covering topics such as machine teaching, interpretability and much more!
Room 104 E
Schedule & details: sites.google.com/view/fgvc12
@cvprconference.bsky.social #CVPR25
openaccess.thecvf.com/CVPR2025_wor...
Poster session:
June 11, 4pm-6pm
ExHall D, poster boards 373-403
#CVPR25 @cvprconference.bsky.social
⏰ Deadline: June 30
📣 Notification: July 11
📷 Camera ready: Aug 8
🏆 Best paper award: $1,000
📅 4 June 2025, 14:00–15:00.
Find more information and sign up here:
www.aicentre.dk/events/talk-...
👉 climateainordics.com/newsletter/2...
www.kaggle.com/competitions...
@kakanikatija.bsky.social @mbarinews.bsky.social @cvprconference.bsky.social @fgvcworkshop.bsky.social @kaggle.com
When: Jan 12-30, 2026
Where: SCBI @smconservation.bsky.social
Workshop: tinyurl.com/workshop-EO-AI-stream
and...👇
Excited to announce the first challenge at MICCAI focusing on the development of self-supervised pretraining of foundation models for brain MRI! 🧠
With access to a large-scale dataset, a codebase, cash prizes, and multiple tracks.
Read more ⬇️
👉 www.kaggle.com/competitions...
@cvprconference.bsky.social @kaggle.com
#FGVC #CVPR #CVPR2025 QIM Center (qim.dk)
[1/5]
Representational Similarity via Interpretable Visual Concepts
arxiv.org/abs/2503.15699
We all know ViT-Large performs better than ResNet-50, but what visual concepts drive this difference? Our new ICLR 2025 paper addresses this question! nkondapa.github.io/rsvc-page/