0Tick committed on
Commit: 9c8232e
Parent(s): 38cc64e

Update README.md

Files changed (1)
  1. README.md +36 -17
README.md CHANGED
@@ -1,37 +1,38 @@
  ---
- license: apache-2.0
+ license: mit
  tags:
  - generated_from_trainer
  metrics:
  - accuracy
  model-index:
- - name: training\
+ - name: e621TagAutocomplete
  results: []
+ co2_eq_emissions: 100
+ language:
+ - en
+ library_name: transformers
+ pipeline_tag: text-generation
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
+ ## Model description
+
+ This is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) intended to be used with the [promptgen](https://github.com/AUTOMATIC1111/stable-diffusion-webui-promptgen) extension inside the AUTOMATIC1111 WebUI.
+ It is trained on the raw tags of e621, with underscores and spaces.

- # training\

- This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
+ # Training
+
+ This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on a dataset of the tags of 116k random posts from e621.net.
  It achieves the following results on the evaluation set:
  - Loss: 4.3983
  - Accuracy: 0.3865

- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed

  ## Training and evaluation data

- More information needed

- ## Training procedure
+ Use this Colab notebook to train your own model (it was also used to train this model):
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/0Tick/stable-diffusion-tools/blob/main/distilgpt2train.ipynb)

  ### Training hyperparameters

@@ -44,7 +45,25 @@ The following hyperparameters were used during training:
  - lr_scheduler_type: linear
  - num_epochs: 3.0

- ### Training results
+ ## Intended uses & limitations
+
+ Since DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model.
+
+ The developers of GPT-2 state in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including:
+
+ > - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*
+ > - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*
+ > - *Entertainment: Creation of games, chat bots, and amusing generations.*
+
+ Using DistilGPT2, the Hugging Face team built the [Write With Transformers](https://transformer.huggingface.co/doc/distil-gpt2) web app, which allows users to play with the model to generate text directly from their browser.
+
+ #### Out-of-scope Uses
+
+ OpenAI states in the GPT-2 [model card](https://github.com/openai/gpt-2/blob/master/model_card.md):
+
+ > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
+ >
+ > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.



@@ -53,4 +72,4 @@ The following hyperparameters were used during training:
  - Transformers 4.27.0.dev0
  - Pytorch 1.13.1+cu116
  - Datasets 2.9.0
- - Tokenizers 0.13.2
+ - Tokenizers 0.13.2
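
The updated card sets `library_name: transformers` and `pipeline_tag: text-generation`, so outside of the promptgen extension the model can also be loaded directly with the transformers text-generation pipeline. Below is a minimal sketch of that; the repository id `0Tick/e621TagAutocomplete` is an assumption inferred from the committer and the model-index name, and the prompt is just an illustrative partial tag list, not an example from the card.

```python
# Sketch: tag autocompletion via the transformers text-generation pipeline.
# NOTE: the repo id below is an assumption; substitute the actual model path.
from transformers import pipeline

generator = pipeline("text-generation", model="0Tick/e621TagAutocomplete")

# Prompt with a partial comma-separated tag list; the model continues it
# with further e621-style tags (underscores and spaces, as in the training data).
outputs = generator(
    "solo, smiling, looking_at_viewer,",
    max_new_tokens=40,
    num_return_sequences=3,
    do_sample=True,
    top_k=50,
)
for out in outputs:
    print(out["generated_text"])
```

Because the card says the model was trained on raw tags with underscores and spaces, prompts that follow the same comma-separated tag format should match the training distribution best.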
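The card lists only part of the training setup in the visible hunks (a linear LR schedule and 3 epochs) and points to the linked Colab notebook for reproducing training. As a rough orientation only, here is a minimal Trainer sketch under assumed values: the learning rate, batch size, and the tag file `e621_tags.txt` (one comma-separated tag list per line) are placeholders, not values taken from the notebook or the elided hyperparameter lines.

```python
# Sketch: fine-tuning distilgpt2 on a plain-text file of tag lists with the
# Hugging Face Trainer. Only lr_scheduler_type and num_train_epochs come from
# the card; everything else is an assumption.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Hypothetical dataset: one post's comma-separated tag list per line.
dataset = load_dataset("text", data_files={"train": "e621_tags.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="e621TagAutocomplete",
    num_train_epochs=3.0,            # from the card
    lr_scheduler_type="linear",      # from the card
    learning_rate=5e-5,              # assumption (Trainer default)
    per_device_train_batch_size=8,   # assumption
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    # mlm=False gives causal-LM labels (shifted input ids), matching GPT-2 style training.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```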