Infinite-Resolution Integral Noise Warping
for Diffusion Models

ICLR 2025

Yitong Deng1,2, Winnie Lin1, Lingxiao Li1, Dmitriy Smirnov1, Ryan Burgert3,4, Ning Yu3, Vincent Dedun1, and Mohammad H. Taghavi1

1 Netflix; 2 Stanford University; 3 Netflix Eyeline Studios; 4 Stony Brook University

Abstract

Adapting pretrained image-based diffusion models to generate temporally consistent videos has become an impactful generative modeling research direction. Training-free noise-space manipulation has proven to be an effective technique, where the challenge is to preserve the Gaussian white noise distribution while injecting temporal consistency. Recently, Chang et al. (2024) formulated this problem using an integral noise representation with distribution-preserving guarantees, and proposed an upsampling-based algorithm to compute it. However, while their mathematical formulation is advantageous, the algorithm incurs a high computational cost. By analyzing the limiting-case behavior of their algorithm as the upsampling resolution goes to infinity, we develop an alternative algorithm that, by gathering increments of multiple Brownian bridges, achieves their infinite-resolution accuracy while reducing the computational cost by orders of magnitude. We prove and experimentally validate our theoretical claims, and demonstrate our method's effectiveness in real-world applications. We further show that our method readily extends to three-dimensional space.
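The paper's full algorithm is not reproduced here; as a conceptual illustration of the Brownian-bridge machinery the abstract mentions, the sketch below (function name and setup are our own, not the authors' implementation) samples a Brownian bridge conditioned on a Gaussian endpoint and recovers its increments. The key distributional property is that the increments are Gaussian and sum exactly back to the endpoint, so redistributing them cannot corrupt the marginal noise distribution:

```python
import numpy as np

def brownian_bridge_increments(endpoint, n_steps, rng):
    """Sample increments of a Brownian bridge on [0, 1] from 0 to `endpoint`.

    The increments are jointly Gaussian and sum exactly to `endpoint`,
    so gathering/redistributing them preserves the endpoint's marginal
    Gaussian distribution. (Illustrative sketch, not the paper's code.)
    """
    # Standard Brownian-motion increments, each with variance 1/n_steps.
    dW = rng.normal(0.0, np.sqrt(1.0 / n_steps), size=n_steps)
    W = np.cumsum(dW)                      # Brownian path W_t at t = k/n
    t = np.arange(1, n_steps + 1) / n_steps
    # Bridge construction: B_t = W_t - t * W_1 + t * endpoint,
    # which pins B_0 = 0 and B_1 = endpoint.
    B = W - t * W[-1] + t * endpoint
    # Increments of the bridge path.
    return np.diff(np.concatenate(([0.0], B)))

rng = np.random.default_rng(0)
z = rng.normal()                            # a Gaussian noise sample to warp
inc = brownian_bridge_increments(z, 16, rng)
assert np.isclose(inc.sum(), z)             # increments reconstitute the noise
```

In the paper's setting, such increments are gathered across warped pixel regions in place of explicit high-resolution upsampling, which is where the computational savings come from.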

Paper

Code

Video Results
Citation
@inproceedings{deng2025infiniteresolution,
    title={Infinite-Resolution Integral Noise Warping for Diffusion Models},
    author={Yitong Deng and Winnie Lin and Lingxiao Li and Dmitriy Smirnov and Ryan D Burgert and Ning Yu and Vincent Dedun and Mohammad H. Taghavi},
    booktitle={The Thirteenth International Conference on Learning Representations},
    year={2025},
    url={https://openreview.net/forum?id=Y6LPWBo2HP}
}