Uniform Autoencoder with Latent Flow Matching: Learning Robust Representations via Bounded Spaces
Abstract
This paper introduces the Uniform Autoencoder (UAE), a generative framework designed to learn effective representations in low-dimensional latent spaces while preserving the geometric structure of the data. Unlike traditional Variational Autoencoders (VAEs), which impose an unbounded Gaussian prior that often leads to posterior collapse or topological mismatch, the UAE enforces a uniform distribution within a bounded latent space. By combining a reconstruction objective with a geometric expansion objective, the model effectively captures the intrinsic data manifold. To enable high-fidelity generation from this bounded space, we employ Latent Flow Matching (LFM) as a post-hoc sampling mechanism that models the empirical distribution of the fixed latent representations. We evaluate the proposed framework on a diverse set of benchmarks, including synthetic manifolds (Moon, Spiral, Swiss Roll) and high-dimensional datasets (Digits, MNIST, CIFAR-10, CelebA). Our experiments demonstrate that the UAE outperforms standard baselines in preserving data topology. Furthermore, the model exhibits strong discriminative capacity, achieving 93% accuracy on downstream classification tasks for Digits and MNIST, validating the effectiveness of uniform latent priors in separating distinct data classes. The proposed method offers a robust alternative for representation learning, balancing generative capability with high-quality feature extraction. Our implementation is available at https://tayfununal.github.io/Article-3/.
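The paper's exact LFM architecture is not specified on this page, so the following is only a minimal NumPy sketch of the generic conditional-flow-matching recipe the abstract describes: draw base points from the bounded uniform latent prior, regress a velocity field toward the straight-line target between base and latent codes, and sample by Euler integration. All function names, shapes, and the constant-shift toy target are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def cfm_pair(x0, x1, t):
    """Conditional flow matching with a linear interpolant:
    x_t = (1 - t) * x0 + t * x1, regression target u = x1 - x0.
    A velocity network v(x_t, t) would be trained to predict u."""
    t = t[:, None]
    xt = (1.0 - t) * x0 + t * x1
    u = x1 - x0
    return xt, u

def euler_sample(velocity, z0, n_steps=100):
    """Generate samples by integrating dz/dt = velocity(z, t) from t=0 to t=1."""
    z = z0.copy()
    dt = 1.0 / n_steps
    for k in range(n_steps):
        z = z + dt * velocity(z, k * dt)
    return z

# Toy check (illustrative): base points from the uniform box [-1, 1]^2 and
# latent codes a constant shift away, so the optimal velocity is that constant.
z0 = rng.uniform(-1.0, 1.0, size=(256, 2))
shift = np.array([0.3, -0.2])
z1 = z0 + shift
xt, u = cfm_pair(z0, z1, rng.uniform(0.0, 1.0, size=256))
samples = euler_sample(lambda z, t: np.broadcast_to(shift, z.shape), z0)
```

In practice the lambda above would be replaced by a trained velocity network; the sketch only verifies that the interpolant and sampler are consistent with each other.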
Original Datasets
Visualizations of the diverse datasets used in our experiments.
1. Moon Dataset.
2. Spiral Dataset.
3. Swiss Roll Dataset.
4. Digits Dataset.
5. MNIST Dataset.
6. CIFAR-10 Dataset.
7. CelebA Dataset.
Dataset-Wise Visualization of Results
Visualizations of the results obtained across different datasets.
1. Moon latent.
2. Moon reconstruction.
3. Moon UAE results.
4. Moon UAE with LFM results.
2D latent space transitions for the Swiss Roll and Spiral datasets on the test set during training.
1. Spiral latent.
2. Spiral reconstruction.
3. Spiral UAE results.
4. Spiral UAE with LFM results.
1. Swiss Roll latent.
2. Swiss Roll reconstruction.
3. Swiss Roll UAE results.
4. Swiss Roll UAE with LFM results.
1. Digits latent.
2. Digits reconstruction.
3. Digits UAE results.
4. Digits UAE with LFM results.
5. Digits: Latent-space interpolation using UAE within the same class.
1. MNIST latent.
2. MNIST reconstruction.
3. MNIST UAE results.
4. MNIST UAE with LFM results.
5. MNIST: Latent-space interpolation using UAE within the same class.
1. CIFAR-10 reconstruction.
2. CIFAR-10 UAE results.
3. CIFAR-10 UAE with LFM results.
4. CIFAR-10: Latent-space interpolation using UAE within the same class.
1. CelebA reconstruction.
2. CelebA UAE results.
3. CelebA UAE with LFM results.
4. CelebA: Latent-space interpolation using UAE between image pairs.
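Several of the panels above show latent-space interpolation. One practical consequence of a bounded uniform latent space is that its support is a convex box, so plain linear interpolation between two codes never leaves the support (unlike a Gaussian latent space, where spherical interpolation is often preferred). A hypothetical helper might look like this; the 2-D shape and the [-1, 1] bounds are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def interpolate_latents(z_a, z_b, n=8):
    """Linear path between two latent codes. Because the uniform latent
    support is a convex box, every convex combination of in-support codes
    is itself in-support, so no clipping or re-projection is needed."""
    ts = np.linspace(0.0, 1.0, n)[:, None]
    return (1.0 - ts) * z_a[None, :] + ts * z_b[None, :]

# Example with two codes from an assumed [-1, 1]^2 latent box.
z_a = np.array([-0.8, 0.5])
z_b = np.array([0.6, -0.9])
path = interpolate_latents(z_a, z_b, n=8)
```

Each row of `path` would then be decoded to produce one frame of an interpolation strip like those shown above.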
BibTeX
@article{Unal2026uae,
title={Uniform Autoencoder with Latent Flow Matching: Learning Robust Representations via Bounded Spaces},
author={Tayfun Ünal and Ünver Çiftçi},
journal={Social Science Research Network (SSRN)},
year={2026},
url={https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6174263}
}