Example code in the model card is broken
I'm using the latest transformers:
pip install git+https://github.com/huggingface/transformers
The script fails on the first line:
ImportError: cannot import name 'AutoModelForImageGeneration' from 'transformers' (C:\Users\~~~\AppData\Local\Programs\Python\Python311\Lib\site-packages\transformers\__init__.py)
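A quick check confirms the class simply doesn't exist in transformers (a minimal sketch, nothing model-specific):

import transformers

print(transformers.__version__)
print(hasattr(transformers, "AutoModelForImageGeneration"))  # False: no such class
print(hasattr(transformers, "AutoTokenizer"))  # True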
Removing the import
from transformers import AutoTokenizer #, AutoModelForImageGeneration
and the line that uses it
model = AutoModelForImageGeneration.from_pretrained(model_name, use_auth_token=API_TOKEN)
gets me to this error:
Error loading model: Unrecognized model in future-technologies/Floral-High-Dynamic-Range. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: albert, ..., zoedepth
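The error suggests the repo's config.json is missing a model_type key. That's easy to verify directly; a minimal sketch with huggingface_hub (repo and filename taken from the error message, and assuming a config.json actually exists at the repo root, otherwise the download raises):

from huggingface_hub import hf_hub_download
import json

cfg_path = hf_hub_download("future-technologies/Floral-High-Dynamic-Range", "config.json")
with open(cfg_path) as f:
    config = json.load(f)
print(config.get("model_type"))  # None means the key is missing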
Note: the variables "model" and "tokenizer" don't appear to be used anywhere else in this script.
Commenting out the model loading try/except block gives this error:
Error initializing pipeline: name 'FluxPipeline' is not defined
This is easily fixed by adding FluxPipeline to the import:
from diffusers import DiffusionPipeline, FluxPipeline
Now it gives this error (along with a warning that FluxPipeline ignores "use_auth_token"):
Error initializing pipeline: module diffusers has no attribute HDRTransformer2DModel
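The missing class is easy to confirm (a minimal sketch):

import diffusers

print(diffusers.__version__)
print(hasattr(diffusers, "HDRTransformer2DModel"))  # False on my install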
Now this error I can't get past. I have updated my diffusers to the latest version:
pip install git+https://github.com/huggingface/diffusers
I still get the same error. If I swap in a different FLUX model, the modified script runs fine:
model_name = "black-forest-labs/FLUX.1-dev"
For completeness, the modified script is below:
from transformers import AutoTokenizer #, AutoModelForImageGeneration
from diffusers import DiffusionPipeline, FluxPipeline
import torch
from PIL import Image
import requests
from io import BytesIO
# Your Hugging Face API token
API_TOKEN = "<retacted>"
# Load the model and tokenizer from Hugging Face
model_name = "future-technologies/Floral-High-Dynamic-Range"
#model_name = "black-forest-labs/FLUX.1-dev"
'''
# Error handling for model loading
try:
    #model = AutoModelForImageGeneration.from_pretrained(model_name, use_auth_token=API_TOKEN)
    tokenizer = AutoTokenizer.from_pretrained(model_name, token=API_TOKEN)
except Exception as e:
    print(f"Error loading model: {e}")
    exit()
'''
# Initialize the diffusion pipeline
try:
    pipe = FluxPipeline.from_pretrained(model_name)
    pipe.to("cuda" if torch.cuda.is_available() else "cpu")
except Exception as e:
    print(f"Error initializing pipeline: {e}")
    exit()
# Example prompt for image generation
prompt = "A futuristic city skyline with glowing skyscrapers during sunset, reflecting the light."
# Error handling for image generation
try:
    result = pipe(prompt)
    image = result.images[0]
except Exception as e:
    print(f"Error generating image: {e}")
    exit()
# Save or display the image
try:
    image.save("floral-hdr.png")
    image.show()
except Exception as e:
    print(f"Error saving or displaying image: {e}")
    exit()
print("Image generation and saving successful!")
Thank you so much for your valuable feedback. We’re actively working on enhancements, including developing a robust Python script designed to boost our model’s performance and efficiency. Your insights are incredibly helpful, and we encourage you to keep sharing your thoughts. We’ll be updating our model card with these improvements soon. Thank you for your continued support!