# Danbooru 2024 SDXL VAE latents in 1k tar
- Dedicated dataset aligned with deepghs/danbooru2024-webp-4Mpixel ("4MP-Focus" refers to the average raw image resolution).
- Latents are generated with aspect-ratio bucketing (ARB) at a maximum size of 1024x1024, the recommended setting in kohya-ss (a simplified bucketing sketch follows this list). The major reason is to make sure I can finetune on an RTX 3090; VRAM usage rises drastically beyond 1024.
- Generated with prepare_buckets_latents_v2.py, a modified version of prepare_buckets_latents.py.
- Used for kohya-ss/sd-scripts. In theory it can replace `*.webp` and `*.txt` along with `meta_lat.json`; the raw data is no longer required.
- In a more extreme case, I have made `*.caption` with `meta_lat_v2.json`, which is 14GB when extracted. See 6DammK9/danbooru2024-captions-1ktar for technical details.
- It took me around 16+ days with 4x RTX 3090 to generate (with many PSU trips, I/O deadlocks, SSDs going offline, corrupted Python envs, etc.). The perfect case would be only 10 days (10 it/s).
- Download along with the webp / txt archives, extract everything to a single directory, and you are good to go. Tags are available in 6DammK9/danbooru2024-tags-1ktar.
- I still don't know how to do multi-GPU training on Windows; ultimately I may need to switch trainers. Use this repo if your setup is already working well.
- The VAE used: madebyollin/sdxl-vae-fp16-fix
- Verify with verify_npz.py. It should take around 15 minutes (16 * 600 it/s) if the OS is stable and you have a good U.2 SSD (Intel P4510 4T) and CPU (Intel Xeon 8358).
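For illustration, here is a minimal sketch of the bucketing idea under a 1024x1024 area cap with 64-px steps. This is a simplification, not the exact algorithm in prepare_buckets_latents_v2.py / kohya-ss sd-scripts.

```python
# Simplified illustration of aspect-ratio bucketing under a 1024x1024 area cap
# with 64-px steps. NOT kohya-ss's exact bucketing code; it only shows the idea.
import math

MAX_AREA = 1024 * 1024
STEP = 64

def bucket_resolution(width: int, height: int) -> tuple[int, int]:
    """Scale the image so its area fits under MAX_AREA, then snap down to 64-px steps."""
    scale = min(1.0, math.sqrt(MAX_AREA / (width * height)))  # never upscale in this sketch
    bucket_w = max(STEP, int(width * scale) // STEP * STEP)
    bucket_h = max(STEP, int(height * scale) // STEP * STEP)
    return bucket_w, bucket_h

print(bucket_resolution(1536, 2048))  # a tall 3:4 image -> (832, 1152)
```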
```
> python ../sd-scripts-runtime/pack_npz.py --npz_dir="H:/danbooru2024-webp-4Mpixel/kohyas_finetune" --meta_json="H:/danbooru2024-webp-4Mpixel/meta_cap_dd.json" --tar_dir="G:/npz_latents/danbooru_sdxl"
Found entries: 8005010
Max ID in the dataset: 8360499
packing npz files: 100%|████████████████████████████████████████| 1000/1000 [4:02:08<00:00, 14.53s/it]
Files written: 1000
Detected npz: 8005010.
```
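As a quick check after extracting a shard, here is a minimal sketch for inspecting one latent cache entry. The path is a hypothetical example; only the `latents` and `crop_ltrb` keys (visible in the dataset viewer) are assumed to be present.

```python
# Sketch: inspect one extracted latent cache entry.
# The path below is a hypothetical example; adjust it to your extraction directory.
import numpy as np

npz_path = "kohyas_finetune/1000.npz"  # hypothetical example
with np.load(npz_path) as data:
    for key in data.files:
        print(key, data[key].shape, data[key].dtype)
    # SDXL VAE latents are 1/8 of the bucketed pixel resolution per spatial axis.
    latents = data["latents"]
    print("approx. pixel resolution:", latents.shape[-1] * 8, "x", latents.shape[-2] * 8)
```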
## Extra: 12.5M merged dataset for both danbooru and e621
Check out `meta_lat_merged.tar.gz`; it is 23.8GB when decompressed.
The keys are cast in the following pattern:
```python
#250225: Relative to --train_data_dir="/tmp/dataset"
FOLDER_A = "danbooru/"
FOLDER_B = "e621/"

merged = {}

def cast_a(k):
    return f"{FOLDER_A}{k}"

def cast_b(k):
    return f"{FOLDER_B}{k}"
```
- One of the best approaches is to create nested folders like `/tmp/dataset/danbooru` and `/tmp/dataset/e621`. Kohya's loader (`torch.utils.data.DataLoader`) will support the localized paths; see the sanity-check sketch below.
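A minimal sanity-check sketch, assuming the decompressed merged metadata is a single JSON file (named `meta_lat_merged.json` here as a placeholder) and that each key maps to `<key>.npz` under `--train_data_dir`:

```python
# Sketch: verify that every key in the merged metadata resolves to an npz file
# under the nested /tmp/dataset/danbooru and /tmp/dataset/e621 layout.
# The metadata file name and the "<key>.npz" naming are assumptions.
import json
from pathlib import Path

train_data_dir = Path("/tmp/dataset")
with open("meta_lat_merged.json", encoding="utf-8") as f:
    merged = json.load(f)

missing = [k for k in merged if not (train_data_dir / f"{k}.npz").exists()]
print(f"{len(missing)} of {len(merged)} entries are missing an npz on disk")
```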