schnell committed
Commit 95939aa · 1 Parent(s): f321dfd

Update README.md

Files changed (1): README.md +5 -1
README.md CHANGED

@@ -50,4 +50,8 @@ output = model(**encoded_input)
 
 The texts are normalized using zenhan, segmented into words using Juman++, and tokenized using SentencePiece. Juman++ 2.0.0-rc3 was used for pretraining.
 
-The model was trained on 8 NVIDIA A100 GPUs.
+The model was trained on 8 NVIDIA A100 GPUs.
+
+
+# Acknowledgments
+In this research work, we used the “mdx: a platform for the data-driven future”.
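
The diff's preprocessing sentence describes a three-step pipeline: width normalization with zenhan, word segmentation with Juman++, and subword tokenization with SentencePiece. The last two steps need external tools (the `jumanpp` binary and a trained SentencePiece model), so the sketch below only illustrates the first step, using the standard library's NFKC normalization as a stand-in for zenhan (zenhan itself lets you choose the conversion direction and character classes, which NFKC does not):

```python
import unicodedata

def normalize_width(text: str) -> str:
    """Stand-in for zenhan-style normalization.

    NFKC compatibility folding maps full-width ASCII letters and digits
    to their half-width forms (and composes half-width katakana).
    """
    return unicodedata.normalize("NFKC", text)

print(normalize_width("ＢＥＲＴで１２３"))  # → BERTで123
```

After this step, the actual pipeline would pass the normalized text to Juman++ for segmentation and then to a SentencePiece model for tokenization; neither call is shown here since both depend on externally installed artifacts.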