---
library_name: diffusers
license: mit
pipeline_tag: unconditional-image-generation
---

# Autoregressive Image Generation without Vector Quantization

## About

This model (MAR) introduces a novel approach to autoregressive image generation by eliminating the need for vector quantization. Instead of relying on discrete tokens, it operates in a continuous-valued space and models the per-token probability distribution with a diffusion process. By employing a Diffusion Loss, the model achieves efficient, high-quality image generation while retaining the speed advantages of autoregressive sequence modeling. This approach simplifies the generation process and extends to continuous-valued domains beyond image synthesis.

The model is described in [this paper](https://arxiv.org/abs/2406.11838).

## Usage

You can load the model through the Hugging Face `DiffusionPipeline` and optionally customize parameters such as the model variant, the number of autoregressive steps, and the class labels.

```python
from diffusers import DiffusionPipeline

# load the pretrained model
pipeline = DiffusionPipeline.from_pretrained(
    "jadechoghari/mar",
    trust_remote_code=True,
    custom_pipeline="jadechoghari/mar",
)

# generate images with the model
generated_image = pipeline(
    model_type="mar_huge",         # choose from 'mar_base', 'mar_large', or 'mar_huge'
    seed=42,                       # set a seed for reproducibility
    num_ar_steps=64,               # number of autoregressive steps
    class_labels=[207, 360, 388],  # provide valid ImageNet class labels
    cfg_scale=4,                   # classifier-free guidance scale
    output_dir="./images",         # directory to save generated images
    cfg_schedule="constant",       # choose between 'constant' (suggested) and 'linear'
)

# display the generated image
generated_image.show()
```
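
The call above also writes the generated images to the directory passed as `output_dir`. A minimal sketch for loading them back with Pillow, assuming they are saved as PNG files (the exact filenames depend on the pipeline), could look like this:

```python
import os

from PIL import Image

# load every PNG the pipeline wrote to the output directory
output_dir = "./images"
for fname in sorted(os.listdir(output_dir)):
    if fname.endswith(".png"):
        image = Image.open(os.path.join(output_dir, fname))
        image.show()
```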
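
## How Diffusion Loss Works (Sketch)

To make the Diffusion Loss idea from the About section concrete, here is a minimal, hypothetical PyTorch sketch (not the released MAR code): a small denoising MLP learns to predict the noise added to a continuous token `x`, conditioned on the autoregressive backbone's output vector `z` for that position and on the diffusion timestep. The shapes, the toy noise schedule, and the network sizes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DenoisingMLP(nn.Module):
    """Toy per-token denoiser conditioned on the AR output vector z (illustrative sizes)."""
    def __init__(self, token_dim=16, cond_dim=768, hidden_dim=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(token_dim + cond_dim + 1, hidden_dim),
            nn.SiLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.SiLU(),
            nn.Linear(hidden_dim, token_dim),
        )

    def forward(self, x_t, t, z):
        # concatenate the noised token, the timestep, and the conditioning vector
        h = torch.cat([x_t, t.unsqueeze(-1).float(), z], dim=-1)
        return self.net(h)  # predicted noise

def diffusion_loss(denoiser, x, z, num_timesteps=1000):
    # x: ground-truth continuous tokens (B, token_dim); z: AR outputs (B, cond_dim)
    t = torch.randint(0, num_timesteps, (x.shape[0],), device=x.device)
    # toy cosine noise schedule (assumption, not the schedule used by MAR)
    alpha_bar = torch.cos(0.5 * torch.pi * t.float() / num_timesteps) ** 2
    noise = torch.randn_like(x)
    x_t = alpha_bar.sqrt().unsqueeze(-1) * x + (1 - alpha_bar).sqrt().unsqueeze(-1) * noise
    pred = denoiser(x_t, t, z)
    # mean squared error between predicted and true noise
    return ((pred - noise) ** 2).mean()
```

At sampling time, the denoiser is run in reverse to draw a continuous token conditioned on `z`, which takes the place of the categorical softmax used in vector-quantized autoregressive models.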