edge-maxxing-newdream-sdxl

This repository holds the baseline for the SDXL NVIDIA GeForce RTX 4090 contest; it can be freely forked and optimized.

Some recommendations are as follows:

- Declare dependencies, including git dependencies, in `pyproject.toml` (see the sketch after this list)
- Include compiled models directly in the repository rather than compiling them at load time; loading time matters far more than file size
- Avoid changing `src/main.py`, as it contains mostly protocol logic; most changes belong in `models` and `src/pipeline.py`
- Edit `requirements.txt` to add extra arguments to be used when installing the package
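
As a minimal sketch of the `pyproject.toml` recommendation: the project metadata, package names, and git URL below are placeholders, not the baseline's actual dependency list.

```toml
[project]
name = "edge-maxxing-newdream-sdxl"   # placeholder metadata for illustration
version = "0.1.0"
dependencies = [
    # illustrative runtime dependencies
    "torch",
    "diffusers",
    # a git dependency can be pinned directly here; the name, URL, and revision are placeholders
    "example-optimizer @ git+https://github.com/example/example-optimizer.git@main",
]
```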

For testing, you need a Docker container with PyTorch and Ubuntu 22.04. Install the listed dependencies with `pip install -r requirements.txt -e .`, then run `start_inference`.
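
A minimal sketch of that workflow, assuming the Docker CLI with GPU support is available and that the package puts `start_inference` on the PATH; the image tag is an assumption, so pick whichever PyTorch image matches the contest hardware:

```bash
# Start a PyTorch container based on Ubuntu 22.04 (image tag is an assumption)
docker run --gpus all -it --rm \
    -v "$(pwd)":/workspace -w /workspace \
    pytorch/pytorch:2.4.0-cuda12.1-cudnn9-devel bash

# Inside the container: install the listed dependencies and the package itself
pip install -r requirements.txt -e .

# Run the baseline's inference entry point
start_inference
```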
