HeyLucasLeao committed
Commit: 6c55098
Parent(s): fa013b0

Update README.md
README.md CHANGED
@@ -4,10 +4,10 @@
 This is a finetuned version of GPT-Neo 125M by EleutherAI for the Portuguese language.
 
 #### Training data
-It was
+It was trained on 227,382 selected texts from a PTWiki dump. You can find all the data here: https://archive.org/details/ptwiki-dump-20210520
 
 #### Training Procedure
-Every text was passed through a GPT2-Tokenizer with bos and eos tokens to separate
+Every text was passed through a GPT2-Tokenizer with bos and eos tokens to separate them, with the maximum sequence length that GPT-Neo supports. It was finetuned using the default settings of the Trainer class, available in the Hugging Face library.
 
 ##### Learning Rate: **2e-4**
 ##### Epochs: **1**
@@ -46,8 +46,8 @@ sample_outputs = model.generate(generated,
 
 # Decoding and printing sequences
 for i, sample_output in enumerate(sample_outputs):
-    print(">> Generated text {}
-
+    print(">> Generated text {}\\
+\\
 {}".format(i+1, tokenizer.decode(sample_output.tolist())))
 
 # >> Generated text
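The training procedure in the updated hunk is terse, so here is a minimal sketch of what it describes, assuming the PTWiki texts are already loaded as a list of strings. Only the bos/eos wrapping, the learning rate (2e-4), the single epoch, and the reliance on Trainer defaults come from the card; the checkpoint id, dataset wrapper, output path, and batch size are illustrative assumptions.

```python
import torch
from torch.utils.data import Dataset
from transformers import (GPT2Tokenizer, GPTNeoForCausalLM,
                          Trainer, TrainingArguments)

# Base checkpoint named by the card: GPT-Neo 125M by EleutherAI.
tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

texts = ["..."]  # placeholder for the 227,382 selected PTWiki texts

# Wrap each text with bos/eos tokens to separate documents, truncating at the
# maximum sequence length the model supports (2048 tokens for GPT-Neo 125M).
encodings = tokenizer(
    [tokenizer.bos_token + t + tokenizer.eos_token for t in texts],
    truncation=True,
    max_length=2048,
)

class PTWikiDataset(Dataset):
    def __init__(self, input_ids):
        self.input_ids = input_ids

    def __len__(self):
        return len(self.input_ids)

    def __getitem__(self, idx):
        ids = torch.tensor(self.input_ids[idx])
        # Causal LM finetuning: the labels are the input ids themselves.
        return {"input_ids": ids, "labels": ids}

# Only learning_rate and num_train_epochs are stated in the card; everything
# else is left at the Trainer defaults the card refers to.
args = TrainingArguments(
    output_dir="./gpt-neo-125M-portuguese",  # hypothetical output path
    learning_rate=2e-4,
    num_train_epochs=1,
    per_device_train_batch_size=1,  # assumed; not stated in the card
)

Trainer(model=model, args=args,
        train_dataset=PTWikiDataset(encodings["input_ids"])).train()
```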
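The second hunk shows only the tail of the card's generation example. Below is a hedged reconstruction around the fixed print call; the checkpoint id, prompt, and sampling parameters are assumptions, and the committed line continuations are rendered as an explicit "\n\n" separator, which appears to be their intent.

```python
from transformers import GPT2Tokenizer, GPTNeoForCausalLM

# Substitute this card's finetuned checkpoint id; the base model is used here
# only so the sketch is self-contained.
tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

# An assumed Portuguese prompt, encoded as input ids.
generated = tokenizer("Era uma vez", return_tensors="pt").input_ids

# Sampling parameters are illustrative; only the variable names appear in the diff.
sample_outputs = model.generate(generated,
                                do_sample=True,
                                max_length=100,
                                num_return_sequences=3)

# Decoding and printing sequences
for i, sample_output in enumerate(sample_outputs):
    # "\n\n" makes the blank-line separator explicit instead of splitting the
    # format string across physical lines as the committed version does.
    print(">> Generated text {}\n\n{}".format(i + 1, tokenizer.decode(sample_output.tolist())))
```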