---
language:
- en
license: apache-2.0
multilinguality:
- monolingual
source_datasets:
- bartman081523/stable-diffusion-discord-prompts
- succinctly/midjourney-prompts
- Gustavosta/Stable-Diffusion-Prompts
pretty_name: "text2image multi-prompt(s): a dataset collection"
tags:
- text generation
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- config_name: original
data_files:
- split: train
path: original/train-*
- split: test
path: original/test-*
dataset_info:
- config_name: default
features:
- name: text
dtype: string
- name: src_dataset
dtype: string
splits:
- name: train
num_bytes: 262736830
num_examples: 1677221
- name: test
num_bytes: 56294291
num_examples: 292876
download_size: 151054782
dataset_size: 319031121
- config_name: original
features:
- name: text
dtype: string
- name: src_dataset
dtype: string
splits:
- name: train
num_bytes: 741427383
num_examples: 3551734
- name: test
num_bytes: 83615440
num_examples: 399393
download_size: 402186258
dataset_size: 825042823
task_categories:
- text-generation
- feature-extraction
---
# text2image multi-prompt(s): a dataset collection
- a collection of several text2image prompt datasets
- the data was cleaned/normalized to remove model-specific API flags such as Midjourney's `--ar` (a hypothetical sketch of this kind of cleaning follows this list)
- the data was de-duplicated at a basic level: exact duplicate prompts were dropped (_after cleaning and normalization_)
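As an illustration of the kind of cleaning described above (the actual rules used for this dataset are not published here, so this is an assumption), a regex can strip Midjourney-style `--flag value` parameters:

```python
import re

# Hypothetical sketch of the cleaning described above: strip
# Midjourney-style "--flag value" parameters from a prompt.
MJ_FLAG_RE = re.compile(r"\s+--\w+(?:\s+[\w:.]+)?")

def clean_prompt(prompt: str) -> str:
    return MJ_FLAG_RE.sub("", prompt).strip()

print(clean_prompt("a castle in the clouds --ar 16:9 --v 5"))
# -> 'a castle in the clouds'
```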
## updates
- Oct 2023: the `default` config has been updated with better deduplication. It was deduplicated with MinHash (_params: n-gram size 3, deduplication threshold 0.6, xxh3 hash function with 32-bit hashes, 128 permutations, batch size 10,000_), which drops 2+ million rows; a sketch of this approach follows the list.
- the original version is still available under `config_name="original"`
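For illustration, here is a minimal sketch of near-duplicate removal with those parameters, using the `datasketch` library. The exact tooling is an assumption: the pipeline used here is not stated, and `datasketch` defaults to a SHA1-based 32-bit hash rather than xxh3.

```python
from datasketch import MinHash, MinHashLSH

def word_ngrams(text: str, n: int = 3):
    # Word-level n-grams (n=3 as in the params above); whether the original
    # pipeline shingled words or characters is an assumption.
    tokens = text.split()
    for i in range(max(len(tokens) - n + 1, 1)):
        yield " ".join(tokens[i : i + n])

def minhash_of(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)  # 128 permutations, as in the params above
    for gram in word_ngrams(text):
        m.update(gram.encode("utf-8"))
    return m

def dedupe(prompts):
    """Keep the first prompt of each near-duplicate cluster (Jaccard >= 0.6)."""
    lsh = MinHashLSH(threshold=0.6, num_perm=128)
    kept = []
    for idx, prompt in enumerate(prompts):
        m = minhash_of(prompt)
        if not lsh.query(m):         # no sufficiently similar prompt kept yet
            lsh.insert(str(idx), m)
            kept.append(prompt)
    return kept

print(dedupe([
    "a cat wearing a hat, 4k, photorealistic",
    "a cat wearing a hat, photorealistic, 4k",
    "oil painting of a mountain at dawn",
]))
```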
## contents
For the `default` config:
```
DatasetDict({
train: Dataset({
features: ['text', 'src_dataset'],
num_rows: 1677221
})
test: Dataset({
features: ['text', 'src_dataset'],
num_rows: 292876
})
})
```
For the `original` config:
```
DatasetDict({
train: Dataset({
features: ['text', 'src_dataset'],
num_rows: 3551734
})
test: Dataset({
features: ['text', 'src_dataset'],
num_rows: 399393
})
})
```
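Both configs can be loaded with the 🤗 `datasets` library; a minimal example (the repo id below is a placeholder, not the actual dataset name):

```python
from datasets import load_dataset

# NOTE: "user/text2image-multi-prompt" is a placeholder repo id.
default_ds = load_dataset("user/text2image-multi-prompt")               # deduplicated config
original_ds = load_dataset("user/text2image-multi-prompt", "original")  # pre-dedup config

print(default_ds["train"][0])  # {'text': '...', 'src_dataset': '...'}
```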
_NOTE: as the other two source datasets do not have a `validation` split, the `validation` split of `succinctly/midjourney-prompts` was merged into `train`._