chunwoolee0 committed 7026bff (parent: 37b699c): Update README.md

README.md:
results: []
---

# chunwoolee0/distilgpt2_eli5_clm

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the ELI5 dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.7237
- Validation Loss: 3.7528
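You can try the fine-tuned checkpoint with the `pipeline` API (a minimal usage sketch, not part of the original card; the prompt is only an example):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub by its repo id.
generator = pipeline("text-generation", model="chunwoolee0/distilgpt2_eli5_clm")

# Generate a short continuation for an ELI5-style prompt.
print(generator("Why is the sky blue?", max_new_tokens=40)[0]["generated_text"])
```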
## Model description

DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using knowledge distillation and was designed to be a faster, lighter version of GPT-2.
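The parameter counts quoted above can be verified locally (a quick sketch, assuming `transformers` with PyTorch is installed):

```python
from transformers import AutoModelForCausalLM

# Compare the distilled model against its 124M-parameter teacher, GPT-2.
for name in ("distilgpt2", "gpt2"):
    model = AutoModelForCausalLM.from_pretrained(name)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.0f}M parameters")
```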
## Intended uses & limitations

This is an exercise in fine-tuning a pretrained causal language model.
## Training and evaluation data

## Training procedure
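The card does not record the exact training code. The sketch below shows how such a causal-LM fine-tune of distilgpt2 is typically set up with Keras (the Train Loss / Validation Loss reporting above suggests a TensorFlow workflow); the dataset split, block size, and hyperparameters are assumptions, not values taken from this card:

```python
from datasets import load_dataset
from transformers import (
    AdamWeightDecay,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    TFAutoModelForCausalLM,
)

block_size = 128  # assumed; not recorded on this card

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

# Assumed data loading, following the Hugging Face causal-LM tutorial;
# the split name and size are illustrative, not taken from this card.
eli5 = load_dataset("eli5", split="train_asks[:5000]").train_test_split(test_size=0.2)
eli5 = eli5.flatten()  # expose the nested "answers.text" field as a column

def tokenize(examples):
    # Each example holds a list of answer strings; join them into one text.
    return tokenizer([" ".join(texts) for texts in examples["answers.text"]])

tokenized = eli5.map(tokenize, batched=True, remove_columns=eli5["train"].column_names)

def group_texts(examples):
    # Concatenate all sequences, then cut them into fixed-size blocks.
    concatenated = {k: sum(examples[k], []) for k in examples}
    total = (len(concatenated["input_ids"]) // block_size) * block_size
    return {
        k: [v[i : i + block_size] for i in range(0, total, block_size)]
        for k, v in concatenated.items()
    }

lm_dataset = tokenized.map(group_texts, batched=True)

# For causal LM the collator builds labels from the inputs; mlm=False.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False, return_tensors="tf")

model = TFAutoModelForCausalLM.from_pretrained("distilgpt2")
train_set = model.prepare_tf_dataset(lm_dataset["train"], shuffle=True, batch_size=16, collate_fn=collator)
eval_set = model.prepare_tf_dataset(lm_dataset["test"], shuffle=False, batch_size=16, collate_fn=collator)

# Compiling without an explicit loss uses the model's built-in LM loss.
model.compile(optimizer=AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01))
model.fit(train_set, validation_data=eval_set, epochs=3)
```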