---
license: cc-by-sa-4.0
base_model: retrieva-jp/t5-large-long
tags:
- generated_from_trainer
model-index:
- name: checkpoints
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# checkpoints

This model is a fine-tuned version of [retrieva-jp/t5-large-long](https://huggingface.co/retrieva-jp/t5-large-long) on the [JMultiWOZ dataset](https://github.com/nu-dialogue/jmultiwoz).

The details of this model, the dataset, and its usage are described in the JMultiWOZ repository: [nu-dialogue/jmultiwoz](https://github.com/nu-dialogue/jmultiwoz).
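
As a quick orientation, the checkpoint can be loaded with the standard `transformers` seq2seq classes, as in the minimal sketch below. The Hub model ID is a placeholder (this card only names the run "checkpoints"), and the plain-text prompt is purely illustrative: the actual dialogue-context format this model expects is defined by the JMultiWOZ scripts.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder Hub ID -- replace with this model's actual repository ID.
model_id = "<hub-user>/checkpoints"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative input only; the real input format (dialogue context, task
# prefixes, etc.) is defined by the JMultiWOZ training/inference scripts.
inputs = tokenizer("こんにちは。京都のホテルを探しています。", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```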

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
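
For orientation, these values map onto Hugging Face `Seq2SeqTrainingArguments` roughly as in the sketch below. This is a reconstruction, not the actual training script (which lives in the JMultiWOZ repository); the output directory and launch command are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the reported hyperparameters. With a per-device batch
# size of 2 on 16 GPUs, the effective (total) batch size is 2 * 16 = 32,
# matching the totals listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="checkpoints",  # assumed from the run name
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    num_train_epochs=5.0,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

# The 16-device distributed setup would come from the launcher, e.g.:
#   torchrun --nproc_per_node=16 train.py ...   (assumed command)
```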

### Framework versions

- Transformers 4.35.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.14.5
- Tokenizers 0.14.1