arxiv:2406.17636

Aligning Diffusion Models with Noise-Conditioned Perception

Published on Jun 25
Submitted by alexgambashidze on Jun 26

Abstract

Recent advancements in human preference optimization, initially developed for Language Models (LMs), have shown promise for text-to-image Diffusion Models, enhancing prompt alignment, visual appeal, and user preference. Unlike LMs, Diffusion Models typically optimize in pixel or VAE space, which does not align well with human perception, leading to slower and less efficient training during the preference alignment stage. We propose using a perceptual objective in the U-Net embedding space of the diffusion model to address these issues. Our approach involves fine-tuning Stable Diffusion 1.5 and XL using Direct Preference Optimization (DPO), Contrastive Preference Optimization (CPO), and supervised fine-tuning (SFT) within this embedding space. This method significantly outperforms standard latent-space implementations across various metrics, including quality and computational cost. For SDXL, our approach achieves win rates of 60.8% in general preference, 62.2% in visual appeal, and 52.1% in prompt following against the original open-sourced SDXL-DPO on the PartiPrompts dataset, while significantly reducing compute. Our approach not only improves the efficiency and quality of human preference alignment for diffusion models but is also easily integrable with other optimization techniques. The training code and LoRA weights will be available here: https://huggingface.co/alexgambashidze/SDXL_NCP-DPO_v0.1
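
As a rough sketch of the idea (not the authors' released code), the snippet below assumes that the Diffusion-DPO objective is simply recomputed on features taken from a frozen, noise-conditioned U-Net encoder rather than on raw pixel/VAE latents; all names here (`perceptual_err`, `ncp_dpo_loss`, the `beta` value) are illustrative placeholders.

```python
import torch
import torch.nn.functional as F


def perceptual_err(pred_feat, target_feat):
    # Mean squared error over all non-batch dimensions of the feature maps
    return (pred_feat - target_feat).pow(2).flatten(1).mean(dim=1)


def ncp_dpo_loss(feat_model_w, feat_ref_w, feat_target_w,
                 feat_model_l, feat_ref_l, feat_target_l,
                 beta=5000.0):
    """DPO loss measured in a perceptual (U-Net embedding) feature space.

    *_w / *_l: features for the preferred / rejected image at the sampled
    noise level; 'model' = trainable net, 'ref' = frozen reference,
    'target' = features of the true denoising target.
    """
    # How well each network reconstructs the target, in feature space
    err_model_w = perceptual_err(feat_model_w, feat_target_w)
    err_model_l = perceptual_err(feat_model_l, feat_target_l)
    err_ref_w = perceptual_err(feat_ref_w, feat_target_w)
    err_ref_l = perceptual_err(feat_ref_l, feat_target_l)

    # Standard Diffusion-DPO logits: the model should improve over the
    # reference more on the preferred sample than on the rejected one
    logits = (err_ref_w - err_model_w) - (err_ref_l - err_model_l)
    return -F.logsigmoid(beta * logits).mean()


# Toy check with random feature maps of shape (batch, C, H, W)
feats = [torch.randn(2, 64, 8, 8) for _ in range(6)]
print(ncp_dpo_loss(*feats))
```

Under this reading, the only change relative to a latent-space Diffusion-DPO loss is the space in which the reconstruction errors are measured, which is consistent with the abstract's claim that the method combines easily with other preference-optimization techniques.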

Community

Paper author and submitter:

Aligning diffusion models in pixel or latent space is not optimal. We significantly improve DPO for diffusion models with a simple perceptual trick. Our method outperforms the original Diffusion-DPO in training speed (measured by synthetic reward) and overall quality. LoRA weights are here: https://huggingface.co/alexgambashidze/SDXL_NCP-DPO_v0.1
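
For readers who just want to try the released weights, a minimal usage sketch with 🤗 diffusers is shown below, assuming the linked repository ships LoRA weights in a format that `load_lora_weights` can read on top of the SDXL base checkpoint; the prompt and sampling settings are placeholders.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model in half precision
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Attach the NCP-DPO LoRA on top of the base SDXL UNet
pipe.load_lora_weights("alexgambashidze/SDXL_NCP-DPO_v0.1")

image = pipe(
    "a watercolor painting of a red fox in a misty forest",
    num_inference_steps=30,
).images[0]
image.save("fox.png")
```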

Models citing this paper 1

Datasets citing this paper 0

Spaces citing this paper 0

Collections including this paper 2