FDS successfully synthesizes fine-grained details across text-to-image and inverse problem tasks.
Flow Matching models a target distribution by learning a marginal velocity field, defined as the average of sample-wise velocities connecting samples from a simple prior to the target data. When multiple sample-wise velocities conflict at the same intermediate state, the averaged velocity can misguide samples toward low-density regions, degrading generation quality. To address this issue, we propose Flow Divergence Sampler (FDS), a training-free framework that refines intermediate states before each solver step. Our key finding is that the severity of this misguidance is quantified by the divergence of the marginal velocity field, which is readily computable during inference with a well-optimized model. FDS exploits this signal to steer states toward less ambiguous regions. As a plug-and-play framework compatible with standard solvers and off-the-shelf flow backbones, FDS consistently improves fidelity across generation tasks, including text-to-image synthesis and inverse problems.
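To make the idea above concrete, here is a minimal toy sketch (not the authors' implementation; the toy velocity field, step sizes, and finite-difference divergence estimate are illustrative assumptions): before each Euler solver step, the intermediate state is nudged down the gradient of the velocity field's divergence, steering it away from high-divergence regions.

```python
import numpy as np

def velocity(x, t):
    # Toy 2D marginal velocity field: contraction toward the origin, with a
    # state-dependent term so the divergence varies across space.
    return -x * (1.0 + 0.5 * np.sin(x[::-1]))

def divergence(v, x, t, eps=1e-4):
    # Central finite-difference estimate of div v(x, t) = sum_i dv_i/dx_i.
    div = 0.0
    for i in range(x.shape[0]):
        e = np.zeros_like(x); e[i] = eps
        div += (v(x + e, t)[i] - v(x - e, t)[i]) / (2 * eps)
    return div

def refine(v, x, t, step=1e-2, eps=1e-4):
    # FDS-style refinement (sketch): move the state a small step down the
    # gradient of the divergence, toward a lower-divergence region.
    grad = np.zeros_like(x)
    for i in range(x.shape[0]):
        e = np.zeros_like(x); e[i] = eps
        grad[i] = (divergence(v, x + e, t) - divergence(v, x - e, t)) / (2 * eps)
    return x - step * grad

def sample(v, x0, n_steps=50):
    # Standard Euler sampling, with a refinement applied before each step.
    x, dt = x0.copy(), 1.0 / n_steps
    for k in range(n_steps):
        t = k * dt
        x = refine(v, x, t)   # training-free state refinement
        x = x + dt * v(x, t)  # unchanged solver step
    return x
```

In a real model the divergence would be estimated from the network (e.g. with a stochastic trace estimator) rather than by finite differences, but the plug-and-play structure is the same: the solver itself is untouched, and only the state fed into each step is refined.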
FDS refines intermediate states by steering trajectories away from high-divergence regions where conflicting velocities degrade generation quality.
An overview of FDS. Our framework refines intermediate states to avoid high-discrepancy regions. In standard settings, severely conflicting sample-wise velocities can drive the marginal velocity toward low-density regions, leading to degraded samples (red cross). To counteract this, our framework effectively steers the trajectory toward a reliable, low-discrepancy region (blue circle).
FDS consistently improves generation quality in various settings.
FDS improves the visual quality of text-to-image generation.
FDS improves the visual quality of Gaussian deblurring and super-resolution.
If you find our work helpful, please cite the following paper.
@misc{cha2026trainingfreerefinementflowmatching,
title={Training-Free Refinement of Flow Matching with Divergence-based Sampling},
author={Yeonwoo Cha and Jaehoon Yoo and Semin Kim and Yunseo Park and Jinhyeon Kwon and Seunghoon Hong},
year={2026},
eprint={2604.04646},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2604.04646},
}