chunwoolee0 committed a82a6a7 (1 parent: 33d4e5f)

Update README.md

Files changed (1): README.md (+37, -4)
README.md CHANGED
@@ -13,22 +13,51 @@ should probably proofread and complete it, then remove this comment. -->
 # ke_t5_base_bongsoo_ko_en_epoch2
 
- This model is a fine-tuned version of [chunwoolee0/ke_t5_base_bongsoo_ko_en](https://huggingface.co/chunwoolee0/ke_t5_base_bongsoo_ko_en) on an unknown dataset.
 
 ## Model description
 
- More information needed
 
 ## Intended uses & limitations
 
- More information needed
 
 ## Training and evaluation data
 
- More information needed
 
 ## Training procedure
 
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -48,6 +77,10 @@ The following hyperparameters were used during training:
 |:-------------:|:-----:|:----:|:---------------:|:-------:|
 | No log | 1.0 | 5625 | 1.6646 | 12.5566 |
 
 ### Framework versions
 
  # ke_t5_base_bongsoo_ko_en_epoch2
 
+ This model is a fine-tuned version of [chunwoolee0/ke_t5_base_bongsoo_ko_en](https://huggingface.co/chunwoolee0/ke_t5_base_bongsoo_ko_en)
+ on the [bongsoo/news_talk_ko_en](https://huggingface.co/datasets/bongsoo/news_talk_ko_en) dataset.
 
  ## Model description
 
+ KE-T5 is a pretrained T5 (text-to-text transfer transformer) model
+ trained on Korean and English corpora, developed by KETI (Korea Electronics Technology Institute).
+ The vocabulary used by KE-T5 consists of 64,000 sub-word tokens
+ and was created with Google's SentencePiece.
+ The SentencePiece model was trained to cover 99.95% of a 30GB corpus
+ with an approximate 7:3 mix of Korean and English.
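+
+ As a quick sanity check of the vocabulary described above (a sketch only; the Hub id
+ below is assumed to be this model's own repository):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ # Load the SentencePiece-based KE-T5 tokenizer that ships with this model.
+ tokenizer = AutoTokenizer.from_pretrained("chunwoolee0/ke_t5_base_bongsoo_ko_en_epoch2")
+
+ print(tokenizer.vocab_size)                      # on the order of 64,000 sub-word tokens
+ print(tokenizer.tokenize("점심식사 후에 산책을 한다."))  # sub-word pieces for a Korean sentence
+ ```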
 
  ## Intended uses & limitations
 
+ Translation from Korean to English (this checkpoint adds a second epoch of fine-tuning).
+
+ ```python
+ >>> from transformers import pipeline
+ >>> translator = pipeline('translation', model='chunwoolee0/ke_t5_base_bongsoo_ko_en_epoch2')
+
+ >>> translator("나는 습관적으로 점심식사 후에 산책을 한다.")
+ [{'translation_text': 'I habitally walk after lunch.'}]
+
+ >>> translator("이 강좌는 허깅페이스가 만든 거야.")
+ [{'translation_text': 'This class was created by Huggface.'}]
+
+ >>> translator("오늘은 늦게 일어났다.")
+ [{'translation_text': 'This day I woke up earlier.'}]
+ ```
+
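+ For reference, the three Korean inputs above mean, roughly: "I habitually take a
+ walk after lunch.", "This course was created by Hugging Face.", and "Today I woke
+ up late." The sample outputs show the model's remaining translation errors.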
 
  ## Training and evaluation data
 
+ [bongsoo/news_talk_ko_en](https://huggingface.co/datasets/bongsoo/news_talk_ko_en)
+
+ train: 360000 rows
+ test: 20000 rows
+ validation: 20000 rows
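+
+ The split sizes above can be reproduced roughly as follows (a sketch only: it
+ assumes the dataset exposes a single train split, and the shuffle seed actually
+ used is not documented):
+
+ ```python
+ from datasets import load_dataset
+
+ raw = load_dataset("bongsoo/news_talk_ko_en", split="train").shuffle(seed=42)
+
+ train_ds = raw.select(range(360_000))            # 360000 rows for training
+ valid_ds = raw.select(range(360_000, 380_000))   # 20000 rows for validation
+ test_ds  = raw.select(range(380_000, 400_000))   # 20000 rows for test
+ ```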
 
  ## Training procedure
 
+ Training starts from the chunwoolee0/ke_t5_base_bongsoo_ko_en checkpoint (the epoch-1 model).
+ max_token_length is set to 64 for stable training,
+ and the learning rate is reduced from 0.0005 (epoch 1) to 0.00002 for this run.
+
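+ A minimal sketch of that setup (not the author's actual training script; the
+ column names and anything not stated above are illustrative assumptions):
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments
+
+ checkpoint = "chunwoolee0/ke_t5_base_bongsoo_ko_en"   # epoch-1 model used as the starting point
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
+
+ def preprocess(batch):
+     # "ko"/"en" column names are assumptions, not taken from the dataset card.
+     inputs = tokenizer(batch["ko"], max_length=64, truncation=True)
+     labels = tokenizer(text_target=batch["en"], max_length=64, truncation=True)
+     inputs["labels"] = labels["input_ids"]
+     return inputs
+
+ args = Seq2SeqTrainingArguments(
+     output_dir="ke_t5_base_bongsoo_ko_en_epoch2",
+     learning_rate=2e-5,          # reduced from 5e-4 used for epoch 1
+     num_train_epochs=1,          # one more epoch on top of the epoch-1 checkpoint
+     predict_with_generate=True,
+ )
+ ```
+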
  ### Training hyperparameters
 
 The following hyperparameters were used during training:
 
 |:-------------:|:-----:|:----:|:---------------:|:-------:|
 | No log | 1.0 | 5625 | 1.6646 | 12.5566 |
 
+ TrainOutput(global_step=5625, training_loss=1.8157017361111112,
+ metrics={'train_runtime': 11137.6996, 'train_samples_per_second': 32.323,
+ 'train_steps_per_second': 0.505, 'total_flos': 2.056934156746752e+16,
+ 'train_loss': 1.8157017361111112, 'epoch': 1.0})
 
 ### Framework versions