#Autoencoders
Kunjing Yang, Zhiwei Wang, Minru Bai
MAUGIF: Mechanism-Aware Unsupervised General Image Fusion via Dual Cross-Image Autoencoders
https://arxiv.org/abs/2511.08272
November 12, 2025 at 7:53 AM
Minsuk Jang, Hyeonseo Jeong, Minseok Son, Changick Kim
CINEMAE: Leveraging Frozen Masked Autoencoders for Cross-Generator AI Image Detection
https://arxiv.org/abs/2511.06325
November 11, 2025 at 1:13 PM
Usha Bhalla, Alex Oesterling, Claudio Mayrink Verdun, Himabindu Lakkaraju, Flavio P. Calmon
Temporal Sparse Autoencoders: Leveraging the Sequential Nature of Language for Interpretability
https://arxiv.org/abs/2511.05541
November 11, 2025 at 12:32 PM
The #CDSM2025 is coming up tomorrow: www.causalscience.org. Mara Mattes will present our joint work with Jannis Kueck on learning and testing the structure of interference effects in social networks - how the treatment of others affects one’s own outcomes - using graph convolutional autoencoders.
November 11, 2025 at 12:00 PM
Xinyuan Yan, Shusen Liu, Kowshik Thopalli, Bei Wang
Visual Exploration of Feature Relationships in Sparse Autoencoders with Curated Concepts
https://arxiv.org/abs/2511.06048
November 11, 2025 at 11:48 AM
Erel Naor, Ofir Lindenbaum: Hybrid Autoencoders for Tabular Data: Leveraging Model-Based Augmentation in Low-Label Settings https://arxiv.org/abs/2511.06961 https://arxiv.org/pdf/2511.06961 https://arxiv.org/html/2511.06961
November 11, 2025 at 6:35 AM
Air Pollution Forecasting Using Autoencoders: A Classification-Based Prediction of NO2, PM10, and SO2 Concentrations

www.mdpi.com/2504-3129/6/...
November 11, 2025 at 6:08 AM
Unsupervised Discovery of High-Redshift Galaxy Populations with Variational Autoencoders. Aayush Saxena https://arxiv.org/abs/2511.05439
November 11, 2025 at 1:19 AM
How does an autoencoder work? 🧐

Autoencoders are artificial neural networks that learn to compress their input into a compact latent representation and reconstruct it from that code. They are often used in anomaly detection, dimensionality reduction, ...

🔄 Repost to help others find this post!
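Since the post is cut off, here is a minimal sketch of the idea in PyTorch; the layer sizes, loss, and data below are illustrative stand-ins, not anything from the post:

```python
# Minimal autoencoder sketch (illustrative sizes, not from the post).
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder: compress the input into a low-dimensional latent code.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: reconstruct the input from the latent code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z)

model = Autoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 784)                 # stand-in batch; replace with real data
x_hat = model(x)
loss = nn.functional.mse_loss(x_hat, x)  # reconstruction error
loss.backward()
opt.step()
# For anomaly detection, a high reconstruction error on a new sample flags
# it as unlike the training data; the latent z doubles as a low-dimensional
# embedding for dimensionality reduction.
```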
November 10, 2025 at 5:43 PM
Noise-augmented autoencoders with perceptual losses yield encodings structured according to a perceptual hierarchy; perceptually salient information is captured in the coarser representation structures.
Perceptually Aligning Representations of Music via Noise-Augmented Autoencoders
Mathias Rose Bjare, Giorgia Cantisani, Marco Pasini, Stefan Lattner, Gerhard Widmer
https://arxiv.org/abs/2511.05350
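A rough sketch of the training signal this abstract describes: reconstruct through a noised latent code and score the result with a feature-space (perceptual) loss. The feature extractor, noise scale, and architecture below are placeholder assumptions, not the paper's actual setup:

```python
# Sketch of noise-augmented autoencoder training with a perceptual loss.
# All modules and constants here are illustrative placeholders.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 64))
decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 1024))
# Stand-in for a pretrained, frozen perceptual feature extractor.
perceptual_net = nn.Sequential(nn.Linear(1024, 128), nn.ReLU()).requires_grad_(False)

def perceptual_loss(x_hat, x):
    # Compare reconstructions in feature space rather than sample space,
    # so perceptually salient structure dominates the training signal.
    return nn.functional.mse_loss(perceptual_net(x_hat), perceptual_net(x))

x = torch.randn(32, 1024)                  # stand-in batch of audio frames
z = encoder(x)
z_noisy = z + 0.1 * torch.randn_like(z)    # noise-augment the latent code
x_hat = decoder(z_noisy)
loss = perceptual_loss(x_hat, x)
loss.backward()
# Intuition from the abstract: only information that survives latent noise
# is reliable for the decoder, so coarse, perceptually salient structure
# gets encoded more robustly than fine detail.
```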
November 10, 2025 at 10:37 AM
[20/30] 183 Likes, 14 Comments, 3 Posts
2510.21890, cs.LG | cs.AI | cs.GR, 24 Oct 2025

🆕The Principles of Diffusion Models

Chieh-Hsin Lai, Yang Song, Dongjun Kim, Yuki Mitsufuji, Stefano Ermon
November 10, 2025 at 12:06 AM
Learning Earthquake Sources Using Symmetric Autoencoders #BSSA

Do deep earthquakes release their energy at once or in bursts? ⚒️

pubs.geoscienceworld.org/ssa/bssa/art...
November 8, 2025 at 3:01 PM
NYU’s new AI architecture makes high-quality image generation faster and cheaper

Telegram AI Digest
#airesearch #transformer #vae

New York University researchers have developed a novel diffusion model architecture called "Diffusion Transformer with Representation Autoencoders" (RAE). The approach improves image generation by enhancing semantic understanding within diffusion models, challenging the long-standing practice of using standard variational autoencoders (SD-VAEs) that primarily capture low-level image features.

The RAE model pairs powerful, pre-trained representation encoders with a trained vision transformer decoder, leveraging recent advances in representation learning, a field previously thought incompatible with image generation because of its focus on semantics. By reusing existing, sophisticated encoders, RAE simplifies training and achieves superior performance without added architectural complexity. The researchers also found that higher-dimensional representations, previously deemed problematic, offer advantages such as richness and faster convergence.

RAE shows significant gains in training efficiency, including a 47x speedup over older models, and achieved state-of-the-art results on the ImageNet benchmark, indicating higher-quality generated images. Ultimately, it paves the way for more capable, cost-effective, and unified generative AI systems.
venturebeat.com
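The recipe above pairs a frozen, pre-trained representation encoder with a decoder trained to map representations back to pixels. A conceptual sketch in PyTorch follows; the toy encoder, decoder layers, and dimensions are placeholder assumptions, not the paper's actual modules:

```python
# Conceptual sketch of the RAE recipe: a frozen pretrained representation
# encoder plus a trainable decoder from representations to pixels. All
# module choices below are toy stand-ins; see the paper for the real ones.
import torch
import torch.nn as nn

class RepresentationAutoencoder(nn.Module):
    def __init__(self, encoder: nn.Module, rep_dim: int, img_dim: int):
        super().__init__()
        self.encoder = encoder.eval()
        for p in self.encoder.parameters():      # freeze: only the decoder trains
            p.requires_grad_(False)
        # Stand-in decoder; the paper trains a vision transformer here.
        self.decoder = nn.Sequential(
            nn.Linear(rep_dim, 512), nn.GELU(), nn.Linear(512, img_dim),
        )

    def forward(self, x):
        with torch.no_grad():
            rep = self.encoder(x)                # semantic representation
        return self.decoder(rep)                 # reconstruct pixels from it

frozen_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 768))  # toy encoder
rae = RepresentationAutoencoder(frozen_enc, rep_dim=768, img_dim=3 * 32 * 32)
x = torch.randn(8, 3, 32, 32)
loss = nn.functional.mse_loss(rae(x), x.flatten(1))  # reconstruction objective
# A diffusion transformer would then be trained to generate samples in the
# representation space, with this decoder rendering them into images.
```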
November 8, 2025 at 7:03 AM
The second day concluded with a fantastic talk by Cristina Martin Linares on "Minimal reconstruction of SpliceAI using distilled matryoshka sparse autoencoders"

They showed that matryoshka SAEs arxiv.org/abs/2503.17547 improve upon openSpliceAI elifesciences.org/reviewed-preprints/107454. #GI2025
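For readers unfamiliar with matryoshka SAEs, a minimal sketch of the nested-reconstruction objective: successive prefixes of the latent dictionary are each asked to reconstruct the input on their own, so early latents capture coarse structure and later ones refine it. Sizes, prefixes, and the sparsity penalty below are illustrative, not from either paper:

```python
# Minimal matryoshka sparse autoencoder sketch (illustrative constants).
import torch
import torch.nn as nn

d_model, d_sae = 256, 1024
prefixes = [128, 256, 512, 1024]           # nested dictionary sizes
enc = nn.Linear(d_model, d_sae)
dec = nn.Linear(d_sae, d_model, bias=False)

x = torch.randn(64, d_model)               # stand-in model activations
z = torch.relu(enc(x))                     # sparse nonnegative codes
loss = 0.0
for m in prefixes:
    z_m = torch.zeros_like(z)
    z_m[:, :m] = z[:, :m]                  # keep only the first m latents
    x_hat = dec(z_m)                       # each prefix must reconstruct alone
    loss = loss + nn.functional.mse_loss(x_hat, x)
loss = loss + 1e-3 * z.abs().sum(dim=-1).mean()   # L1 sparsity penalty
loss.backward()
```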
November 7, 2025 at 1:08 PM
A study reveals that Matching Pursuit Sparse Autoencoders (MP-SAE) improve the extraction of correlated features over traditional shallow architectures, enhancing interpretability and feature learning in complex data representations. https://arxiv.org/abs/2506.05239
Evaluating Sparse Autoencoders: From Shallow Design to Matching Pursuit
November 6, 2025 at 9:31 PM
New research presents MP-SAE, enhancing Sparse Autoencoders via a residual-guided Matching Pursuit method. It captures complex hierarchical and nonlinear features in neural representations, challenging linear assumptions and revealing multimodal interactions. https://arxiv.org/abs/2506.03093
From Flat to Hierarchical: Extracting Sparse Representations with Matching Pursuit
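A minimal sketch of the residual-guided matching-pursuit encoding described in these two posts: each step selects the dictionary atom best correlated with the current residual and explains it away, so later features are conditioned on earlier ones rather than all being chosen in one shot as in a shallow encoder. The dictionary size and step count below are illustrative assumptions:

```python
# Matching-pursuit sparse encoding sketch (illustrative sizes).
import torch

def matching_pursuit_encode(x, D, n_steps=8):
    """x: (batch, d_model); D: (n_atoms, d_model), rows unit-norm."""
    residual = x.clone()
    codes = torch.zeros(x.shape[0], D.shape[0])
    for _ in range(n_steps):
        scores = residual @ D.T                    # correlation with each atom
        idx = scores.argmax(dim=-1)                # best atom per sample
        coef = scores.gather(1, idx[:, None])      # its coefficient
        codes.scatter_add_(1, idx[:, None], coef)  # accumulate into the code
        residual = residual - coef * D[idx]        # explain away, then repeat
    return codes, residual

d_model, n_atoms = 256, 1024
D = torch.randn(n_atoms, d_model)
D = D / D.norm(dim=-1, keepdim=True)               # unit-norm dictionary
x = torch.randn(32, d_model)                       # stand-in activations
codes, residual = matching_pursuit_encode(x, D)
x_hat = codes @ D                                  # reconstruction from sparse code
```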
November 6, 2025 at 7:41 AM
Technical sophistication: research on sparse autoencoders, post-hoc attribution, multimodal reasoning failures, and federation for generalizability. The work is moving toward explainability and auditability; transparency tech is advancing in parallel with capability.
November 6, 2025 at 4:48 AM