Add pipeline tag: text-to-image
This PR adds `pipeline_tag: text-to-image` to the model card metadata, making the model discoverable through the Hugging Face model search for text-to-image pipelines.
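As a quick illustration (not part of the PR itself): once the tag is merged, the model should appear in tag-filtered Hub searches. A minimal sketch using `huggingface_hub`, assuming a recent release where `list_models()` accepts a `pipeline_tag` argument:

```python
# Sketch: list text-to-image models via the Hub API.
# Assumes a recent huggingface_hub release where list_models() accepts pipeline_tag.
from huggingface_hub import HfApi

api = HfApi()
for model in api.list_models(pipeline_tag="text-to-image", search="MS-Diffusion", limit=5):
    # id and pipeline_tag are read from the card metadata this PR edits
    print(model.id, model.pipeline_tag)
```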
README.md CHANGED

@@ -1,13 +1,14 @@
 ---
-license: apache-2.0
 language:
 - en
 library_name: diffusers
+license: apache-2.0
 tags:
 - text-to-image
 - stable diffusion
 - personalization
 - msdiffusion
+pipeline_tag: text-to-image
 ---
 
 # Introduction
@@ -31,4 +32,4 @@ Please refer to our [GitHub repository](https://github.com/MS-Diffusion/MS-Diffu
 - This repo only contains the trained model checkpoint without data, code, or base models. Please check the GitHub repository carefully to get detailed instructions.
 - The `scale` parameter is used to determine the extent of image control. For default, the `scale` is set to 0.6. In practice, the `scale` of 0.4 would be better if your input contains subjects needing to effect on the whole image, such as the background. **Feel free to adjust the `scale` in your applications.**
 - The model prefers to need layout inputs. You can use the default layouts in the inference script, while more accurate and realistic layouts generate better results.
-- Though MS-Diffusion beats SOTA personalized diffusion methods in both single-subject and multi-subject generation, it still suffers from the influence of background in subject images. The best practice is to use masked images since they contain no irrelevant information.
+- Though MS-Diffusion beats SOTA personalized diffusion methods in both single-subject and multi-subject generation, it still suffers from the influence of background in subject images. The best practice is to use masked images since they contain no irrelevant information.
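The README notes in the diff point out that this repo ships only the trained checkpoint and that inference (including the `scale` setting) follows the code in the GitHub repository. A minimal sketch of fetching the checkpoint with `huggingface_hub`; the repo id and filename below are hypothetical placeholders, not confirmed by this PR:

```python
# Sketch: download the MS-Diffusion checkpoint before running the inference
# script from the GitHub repository. REPO_ID and FILENAME are hypothetical
# placeholders; check the model page and GitHub repo for the real values.
from huggingface_hub import hf_hub_download

REPO_ID = "<org-or-user>/MS-Diffusion"  # placeholder
FILENAME = "ms_adapter.bin"             # placeholder checkpoint name

ckpt_path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME)
print(f"Checkpoint downloaded to {ckpt_path}")
# Pass ckpt_path to the MS-Diffusion inference script from the GitHub repo and
# tune its `scale` argument (default 0.6; ~0.4 when subjects should affect the
# whole image, e.g. backgrounds), as described in the README notes above.
```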