Update README.md

PommesPeter committed • Commit 61b5a5d • Parent(s): ba76561

README.md CHANGED
# Lumina-Next-T2I

The `Lumina-Next-T2I` model uses `Next-DiT` with 2B parameters as its backbone, together with [Gemma-2B](https://huggingface.co/google/gemma-2b) as the text encoder. Compared with `Lumina-T2I`, it has faster inference, a richer range of generation styles, broader multilingual support, etc.

Our generative model has `Next-DiT` as the backbone, the text encoder is the `Gemma` 2B model, and the VAE uses a version of `sdxl` fine-tuned by stabilityai.
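For orientation, the components named above correspond to standard Hub repositories. A hedged sketch of loading the text encoder and VAE individually with `transformers`/`diffusers` — the `stabilityai/sdxl-vae` repo id is an assumption (the fine-tuned VAE the model actually ships may differ), and network access is required at call time:

```python
# Sketch: load Lumina-Next-T2I's text encoder and VAE components separately.
# google/gemma-2b comes from the README; stabilityai/sdxl-vae is an assumed repo id.
def load_components():
    # Imports are deferred so the module can be inspected without the deps installed.
    from transformers import AutoModel, AutoTokenizer  # Gemma text encoder
    from diffusers import AutoencoderKL                # SDXL-style VAE

    tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
    text_encoder = AutoModel.from_pretrained("google/gemma-2b")
    vae = AutoencoderKL.from_pretrained("stabilityai/sdxl-vae")  # assumption
    return tokenizer, text_encoder, vae
```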
## 📰 News

- [2024-5-28] 🎉🎉🎉 We updated the `Lumina-Next-T2I` model to support 2K resolution image generation.
- [2024-5-16] ❗❗❗ We have converted the `.pth` weights to `.safetensors` weights. Please pull the latest code to use `demo.py` for inference.
On some outdated distros (e.g., CentOS 7), you may also want to check that a late enough `gcc` is available:

```bash
gcc --version
```

Download the Lumina-T2X repo from GitHub:

```bash
git clone https://github.com/Alpha-VLLM/Lumina-T2X
```
```bash
pip install -e .
```

2. Prepare the pre-trained model

⭐⭐ (Recommended) you can use `huggingface-cli` to download our model:

```bash
huggingface-cli download --resume-download Alpha-VLLM/Lumina-Next-T2I --local-dir /path/to/ckpt
```
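If you prefer to stay in Python, the same download can be done with the `huggingface_hub` library that backs the CLI. A sketch, assuming `huggingface_hub` is installed; the target directory is a placeholder, as above:

```python
# Sketch: fetch the Lumina-Next-T2I checkpoint with the huggingface_hub API,
# mirroring the `huggingface-cli download` command above.
# Requires `pip install huggingface_hub`.
def download_ckpt(local_dir: str = "/path/to/ckpt") -> str:
    # Lazy import: the dependency is only needed when the download actually runs.
    from huggingface_hub import snapshot_download

    # Downloads the whole Alpha-VLLM/Lumina-Next-T2I repo into local_dir
    # and returns the local snapshot path.
    return snapshot_download(repo_id="Alpha-VLLM/Lumina-Next-T2I", local_dir=local_dir)
```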
e.g. Demo command:

```bash
cd lumina_next_t2i
lumina_next infer -c "config/infer/settings.yaml" "a snowman of ..." "./outputs"
```
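If you drive the CLI from Python (for batch jobs, say), the demo invocation above can be assembled programmatically. A minimal sketch using only the stdlib; the config path, prompt, and output directory are the placeholder values from the demo command:

```python
# Sketch: assemble the `lumina_next infer` demo command as an argument vector,
# ready to pass to subprocess.run once the package is installed.
import shlex

def build_infer_cmd(config: str, prompt: str, out_dir: str) -> list[str]:
    """Return the argv for `lumina_next infer -c <config> <prompt> <out_dir>`."""
    return ["lumina_next", "infer", "-c", config, prompt, out_dir]

cmd = build_infer_cmd("config/infer/settings.yaml", "a snowman of ...", "./outputs")
print(shlex.join(cmd))  # shell-quoted form of the command
# e.g. subprocess.run(cmd, cwd="lumina_next_t2i", check=True)  # once installed
```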
### Web Demo