Daniel Krentzel
@danielkrentzel.bsky.social
Hunting for new antibiotics with deep learning at @pasteur.fr 🧫 | Previously @crick.ac.uk and 🎓 @imperialcollegeldn.bsky.social
📊🎯 We evaluated the performance of CLEM-Reg against an expert electron microscopist using lysosomes as target structures and found that CLEM-Reg aligns lysosomes with submicron accuracy (7/9)
September 10, 2025 at 9:37 AM
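To make "submicron accuracy" concrete, one minimal way to score a registration is to compare matched lysosome centroids between the registered fluorescence volume and the EM volume. The sketch below assumes 3D label images sharing a physical voxel size; it is not the exact evaluation protocol used in the paper.

```python
# Hypothetical accuracy check: distance from each fluorescence lysosome
# centroid to the nearest EM lysosome centroid, in microns.
import numpy as np
from scipy.spatial import cKDTree
from skimage.measure import regionprops

def centroid_errors_um(fm_labels, em_labels, voxel_size_um):
    """voxel_size_um: (z, y, x) spacing shared by both label volumes (assumption)."""
    fm_cents = np.array([r.centroid for r in regionprops(fm_labels)]) * voxel_size_um
    em_cents = np.array([r.centroid for r in regionprops(em_labels)]) * voxel_size_um
    dists, _ = cKDTree(em_cents).query(fm_cents)   # nearest-neighbour matching
    return dists

# errors = centroid_errors_um(fm_lysosomes, em_lysosomes, np.array([0.1, 0.05, 0.05]))
# print(np.median(errors))   # submicron accuracy = median well below 1 µm
```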
3️⃣ Finally, the fluorescence image is overlaid onto the electron microscopy image using the transform found by the point cloud registration. (6/9)
September 10, 2025 at 9:37 AM
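In practice, this overlay step can amount to resampling the fluorescence volume into the EM frame with the recovered affine transform and then viewing both as layers. The 4×4 matrix and the image names below are placeholders, not CLEM-Reg's actual API.

```python
# Minimal sketch of the overlay step: warp the fluorescence volume into the
# EM frame with the affine transform from the point cloud registration.
import numpy as np
from scipy.ndimage import affine_transform

def warp_fm_to_em(fm_volume, em_shape, affine_4x4):
    """Resample the fluorescence volume onto the EM grid with an affine transform."""
    # scipy expects the inverse mapping (output coords -> input coords)
    inverse = np.linalg.inv(affine_4x4)
    return affine_transform(
        fm_volume,
        matrix=inverse[:3, :3],
        offset=inverse[:3, 3],
        output_shape=em_shape,
        order=1,                 # linear interpolation is enough for an overlay
    )

# viewer.add_image(em_volume)                      # e.g. in napari
# viewer.add_image(warp_fm_to_em(fm, em_volume.shape, T),
#                  colormap="magenta", blending="additive")
```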
2️⃣ Then, we sample point clouds from the segmentations to obtain a modality-agnostic and lightweight representation. The two resulting point clouds are registered using a probabilistic point cloud registration algorithm. (5/9)
September 10, 2025 at 9:37 AM
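As a rough sketch of this step, sampling foreground voxel coordinates from each mitochondria mask already yields the lightweight point clouds; the registration call in the trailing comment assumes the probreg package's CPD interface and is only one possible choice of probabilistic registration.

```python
# Sample a fixed number of foreground voxel coordinates from a segmentation
# mask to get a lightweight, modality-agnostic point cloud.
import numpy as np

def sample_point_cloud(mask, n_points=5000, voxel_size=(1.0, 1.0, 1.0), seed=0):
    """Randomly sample foreground voxel coordinates, scaled to physical units."""
    rng = np.random.default_rng(seed)
    coords = np.argwhere(mask > 0).astype(float)      # (N, 3) in z, y, x
    idx = rng.choice(len(coords), size=min(n_points, len(coords)), replace=False)
    return coords[idx] * np.asarray(voxel_size)

# fm_points = sample_point_cloud(fm_mito_mask, voxel_size=fm_spacing)
# em_points = sample_point_cloud(em_mito_mask, voxel_size=em_spacing)
# One option for the probabilistic registration (assumed probreg API):
# from probreg import cpd
# tf_param, _, _ = cpd.registration_cpd(fm_points, em_points, tf_type_name="rigid")
```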
1️⃣ CLEM-Reg starts off by segmenting mitochondria in both the fluorescence and the electron microscopy images. Mitochondria are a good target because they're usually well distributed across the cell. (4/9)
September 10, 2025 at 9:37 AM
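The post doesn't spell out the segmentation itself, so the snippet below is a purely illustrative stand-in: a classical pipeline for a mitochondria-labelled fluorescence channel. The EM side typically needs a trained model and is not sketched here.

```python
# Not CLEM-Reg's actual segmentation step, just a minimal classical stand-in
# for the fluorescence channel: smooth, Otsu-threshold, remove small objects.
from skimage.filters import gaussian, threshold_otsu
from skimage.morphology import remove_small_objects

def segment_fm_mitochondria(fm_channel, sigma=1.0, min_size=64):
    """Rough mitochondria mask from a fluorescence channel."""
    smoothed = gaussian(fm_channel, sigma=sigma)
    mask = smoothed > threshold_otsu(smoothed)
    return remove_small_objects(mask, min_size=min_size)
```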
And this is why we developed CLEM-Reg, an algorithm that automatically and accurately registers vCLEM images in minutes. (3/9)
September 10, 2025 at 9:37 AM
For most image registration tasks, intensity-based methods work well. Correlative light and electron microscopy (CLEM) data, however, are much trickier to align because of the large differences in visual appearance between the two modalities. As a result, CLEM data are usually registered manually. (2/9)
September 10, 2025 at 9:37 AM
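For context, "intensity-based" here means optimising a similarity metric computed directly on pixel values, for example a mutual-information registration as sketched below with SimpleITK; it is exactly this kind of metric that struggles when the two modalities look as different as fluorescence and EM. Paths and settings are placeholders.

```python
# Illustrative intensity-based 3D registration (not part of CLEM-Reg):
# optimise a rigid transform by maximising Mattes mutual information.
import SimpleITK as sitk

fixed = sitk.ReadImage("em_volume.nrrd", sitk.sitkFloat32)    # placeholder paths
moving = sitk.ReadImage("fm_volume.nrrd", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInitialTransform(sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY))
reg.SetInterpolator(sitk.sitkLinear)

transform = reg.Execute(fixed, moving)  # tends to mis-converge on CLEM data
```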
And the fifth and most advanced challenge was to implement a region-properties measurement plugin with PyQt
📚 github.com/Image-Analys...
May 14, 2025 at 5:44 AM
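A hedged sketch of what such a plugin can boil down to: a hand-written PyQt widget docked into napari that runs scikit-image's regionprops_table on a Labels layer. The layer name and measured properties below are illustrative, not the course solution.

```python
# Minimal PyQt widget for measuring region properties inside napari.
import napari
from qtpy.QtWidgets import QPushButton, QVBoxLayout, QWidget
from skimage.measure import regionprops_table

class MeasureWidget(QWidget):
    def __init__(self, viewer: "napari.Viewer"):
        super().__init__()
        self.viewer = viewer
        layout = QVBoxLayout(self)
        button = QPushButton("Measure labels")
        button.clicked.connect(self._measure)
        layout.addWidget(button)

    def _measure(self):
        labels = self.viewer.layers["labels"].data           # assumed layer name
        table = regionprops_table(labels, properties=("label", "area", "centroid"))
        print(table)                                         # or show in a Qt table view

# viewer = napari.Viewer()
# viewer.window.add_dock_widget(MeasureWidget(viewer), name="regionprops")
```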
For the fourth challenge, the group had to create a cell tracking plugin
📚 github.com/Image-Analys...
May 14, 2025 at 5:44 AM
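At its simplest, the tracking task reduces to linking detections between frames. The toy greedy nearest-neighbour linker below emits data in napari's Tracks layer format (track_id, t, y, x); real plugins usually wrap a dedicated tracker (e.g. btrack or trackpy), so this is only an illustration.

```python
# Toy greedy, non-exclusive frame-to-frame linking of detected centroids.
import numpy as np
from scipy.spatial.distance import cdist

def link_frames(centroids_per_frame, max_dist=20.0):
    """centroids_per_frame: list of (N_t, 2) arrays of (y, x) per time point."""
    next_id, prev_ids, rows = 0, [], []
    for t, cents in enumerate(centroids_per_frame):
        ids = []
        for c in cents:
            if t > 0 and len(centroids_per_frame[t - 1]) > 0:
                d = cdist([c], centroids_per_frame[t - 1])[0]
                j = int(np.argmin(d))
                if d[j] < max_dist:                  # link to nearest previous cell
                    ids.append(prev_ids[j])
                    rows.append([prev_ids[j], t, *c])
                    continue
            ids.append(next_id)                      # otherwise start a new track
            rows.append([next_id, t, *c])
            next_id += 1
        prev_ids = ids
    return np.array(rows)                            # columns: track_id, t, y, x

# viewer.add_tracks(link_frames(centroids))          # napari Tracks layer
```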
The first group developed a plugin to measure region properties from a segmented image
📚 github.com/Image-Analys...
May 14, 2025 at 5:44 AM
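Compared with the hand-written PyQt version sketched further up, the same measurement can also be exposed through magicgui with almost no boilerplate; again this is only an illustration, not the group's code.

```python
# Minimal magicgui-based alternative for a regionprops measurement widget.
import napari.types
from magicgui import magicgui
from skimage.measure import regionprops_table

@magicgui(call_button="Measure")
def measure_labels(labels: napari.types.LabelsData):
    props = regionprops_table(labels, properties=("label", "area", "centroid"))
    print(props)   # a real plugin would show this in a table widget

# viewer.window.add_dock_widget(measure_labels)
```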
Indeed, the participants formed groups and implemented @napari.org plugins from scratch for tasks like pixel classification or cell tracking.
At the end, the groups did a live demo of their plugin 💻
I was super impressed! 🥳
#NEUBIASPasteur2025
📚 Course materials: github.com/Image-Analys...
May 13, 2025 at 10:33 AM
Finally, we show that the latent representations of our deep learning model can be used to robustly detect if bacteria were exposed to drugs with novel modes of action. (6/6)
April 1, 2025 at 9:21 AM
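One generic way to turn latent representations into a "novel mode of action" flag is a nearest-neighbour novelty score against the embeddings of reference antibiotics, as sketched below; the detection statistic actually used in the work may differ.

```python
# Nearest-neighbour novelty score over latent embeddings (illustrative).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def novelty_scores(reference_latents, query_latents, k=5):
    """Mean distance of each query embedding to its k nearest reference embeddings."""
    nn = NearestNeighbors(n_neighbors=k).fit(reference_latents)
    dists, _ = nn.kneighbors(query_latents)
    return dists.mean(axis=1)      # large score = unlike any known mode of action

# scores = novelty_scores(latents_known_moa, latents_new_compound)
# threshold = np.percentile(novelty_scores(latents_known_moa, latents_known_moa), 95)
# is_novel = scores > threshold
```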
Then, we explored whether our approach could assign the mode of action of a previously unseen compound by conducting a leave-one-compound-out experiment. (5/6)
April 1, 2025 at 9:11 AM
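The leave-one-compound-out protocol itself is easy to sketch with scikit-learn's LeaveOneGroupOut, holding out every image of one compound at a time; the logistic-regression probe below is only a stand-in for the actual model.

```python
# Leave-one-compound-out evaluation of mode-of-action assignment (sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

def loco_accuracy(latents, moa_labels, compound_ids):
    """latents: (N, D) array; moa_labels and compound_ids: length-N arrays."""
    latents, moa_labels = np.asarray(latents), np.asarray(moa_labels)
    correct, splitter = 0, LeaveOneGroupOut()
    for train_idx, test_idx in splitter.split(latents, moa_labels, groups=compound_ids):
        clf = LogisticRegression(max_iter=1000).fit(latents[train_idx], moa_labels[train_idx])
        preds = list(clf.predict(latents[test_idx]))
        majority = max(set(preds), key=preds.count)   # vote over the held-out compound's images
        correct += int(majority == moa_labels[test_idx][0])
    return correct / splitter.get_n_splits(groups=compound_ids)
```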
We also investigated the latent representations of our deep learning model and found that we could use them to detect if bacteria had been exposed to antibiotics, even at drug concentrations well below the inhibitory threshold. (4/6)
April 1, 2025 at 9:00 AM
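A simple way to probe this kind of claim on the embeddings is a per-concentration treated-vs-untreated linear classifier, as sketched below; the variable names and the assumption that untreated samples are encoded with concentration 0 are mine, not the paper's.

```python
# Linear probe on latent features: treated vs. untreated, per concentration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def exposure_detectability(latents, treated, concentrations):
    """Cross-validated treated/untreated accuracy of a linear probe, per concentration.
    Assumes untreated samples carry concentration 0."""
    latents, treated, concentrations = map(np.asarray, (latents, treated, concentrations))
    scores = {}
    for c in np.unique(concentrations[concentrations > 0]):
        keep = (concentrations == c) | (concentrations == 0)   # this dose vs. untreated
        probe = LogisticRegression(max_iter=1000)
        scores[c] = cross_val_score(probe, latents[keep], treated[keep], cv=5).mean()
    return scores
```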
We then wondered if all imaging channels were necessary to distinguish between modes of action. To our surprise, we found that brightfield images alone were sufficient to achieve excellent classification accuracies. (3/6)
April 1, 2025 at 8:48 AM
We found that our deep learning model could robustly recognise, directly from images, which compounds bacteria had been exposed to, as well as their mode of action. (2/6)
April 1, 2025 at 8:43 AM
We first acquired a high-throughput imaging dataset of E. coli bacteria exposed to reference antibiotics and trained a convolutional net to distinguish between treatment conditions. (1/6)
April 1, 2025 at 8:38 AM
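As a deliberately small stand-in for the kind of convolutional classifier described here (the real architecture isn't reproduced in this thread): multi-channel crops in, treatment-condition logits out, with the pooled feature vector serving as the latent representation probed in the later posts. Channel count, crop size, and class count are placeholders.

```python
# Tiny illustrative CNN for classifying treatment conditions from image crops.
import torch
import torch.nn as nn

class TreatmentCNN(nn.Module):
    def __init__(self, in_channels=3, n_conditions=15):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, n_conditions)

    def forward(self, x):                      # x: (batch, channels, H, W)
        latent = self.features(x).flatten(1)   # this embedding is what later posts probe
        return self.classifier(latent)

# logits = TreatmentCNN()(torch.randn(8, 3, 64, 64))
```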