---
base_model: papluca/xlm-roberta-base-language-detection
language:
- ar
- bg
- de
- el
- en
- es
- fr
- hi
- it
- ja
- nl
- pl
- pt
- ru
- sw
- th
- tr
- ur
- vi
- zh
- multilingual
license: mit
pipeline_tag: text-classification
tags:
- text-classification
- onnx
---

# xlm-roberta-base-language-detection

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [Language Identification](https://huggingface.co/datasets/papluca/language-identification#additional-information) dataset.

## Model description

This model is an XLM-RoBERTa transformer model with a classification head on top (i.e. a linear layer on top of the pooled output).
For additional information please refer to the [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) model card or to the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Conneau et al.

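The shape of that head can be pictured with a minimal sketch (illustrative names and dropout value, not the exact Transformers implementation, which also inserts a dense projection and tanh; XLM-R base emits 768-dimensional hidden states, and this checkpoint has 20 labels):

```python
import torch
import torch.nn as nn

# Minimal sketch of "a linear layer on top of the pooled output".
class LanguageClassificationHead(nn.Module):
    def __init__(self, hidden_size: int = 768, num_labels: int = 20, dropout: float = 0.1):
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        self.out_proj = nn.Linear(hidden_size, num_labels)

    def forward(self, pooled_output: torch.Tensor) -> torch.Tensor:
        # One logit per supported language
        return self.out_proj(self.dropout(pooled_output))

head = LanguageClassificationHead()
logits = head(torch.zeros(2, 768))  # a batch of 2 pooled outputs
print(logits.shape)  # torch.Size([2, 20])
```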
## Intended uses & limitations

You can directly use this model as a language detector, i.e. for sequence classification tasks. Currently, it supports the following 20 languages:

`arabic (ar), bulgarian (bg), german (de), modern greek (el), english (en), spanish (es), french (fr), hindi (hi), italian (it), japanese (ja), dutch (nl), polish (pl), portuguese (pt), russian (ru), swahili (sw), thai (th), turkish (tr), urdu (ur), vietnamese (vi), and chinese (zh)`

## Training and evaluation data

The model was fine-tuned on the [Language Identification](https://huggingface.co/datasets/papluca/language-identification#additional-information) dataset, which consists of text sequences in 20 languages. The training set contains 70k samples, while the validation and test sets contain 10k samples each. The average accuracy on the test set is **99.6%** (this matches the macro- and weighted-average F1-scores, since the test set is perfectly balanced). A more detailed evaluation is provided in the following table.

| Language | Precision | Recall | F1-score | support |
|:--------:|:---------:|:------:|:--------:|:-------:|
|ar        |0.998      |0.996   |0.997     |500      |
|bg        |0.998      |0.964   |0.981     |500      |
|de        |0.998      |0.996   |0.997     |500      |
|el        |0.996      |1.000   |0.998     |500      |
|en        |1.000      |1.000   |1.000     |500      |
|es        |0.967      |1.000   |0.983     |500      |
|fr        |1.000      |1.000   |1.000     |500      |
|hi        |0.994      |0.992   |0.993     |500      |
|it        |1.000      |0.992   |0.996     |500      |
|ja        |0.996      |0.996   |0.996     |500      |
|nl        |1.000      |1.000   |1.000     |500      |
|pl        |1.000      |1.000   |1.000     |500      |
|pt        |0.988      |1.000   |0.994     |500      |
|ru        |1.000      |0.994   |0.997     |500      |
|sw        |1.000      |1.000   |1.000     |500      |
|th        |1.000      |0.998   |0.999     |500      |
|tr        |0.994      |0.992   |0.993     |500      |
|ur        |1.000      |1.000   |1.000     |500      |
|vi        |0.992      |1.000   |0.996     |500      |
|zh        |1.000      |1.000   |1.000     |500      |

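Because every language has the same support (500 samples out of 10k), accuracy equals the macro-averaged recall, and the macro and weighted F1 averages coincide. A quick sanity check over the per-class scores in the table above:

```python
# Per-class recall and F1 copied from the evaluation table above
# (support is 500 for every language, so the test set is perfectly balanced).
recall = [0.996, 0.964, 0.996, 1.000, 1.000, 1.000, 1.000, 0.992, 0.992, 0.996,
          1.000, 1.000, 1.000, 0.994, 1.000, 0.998, 0.992, 1.000, 1.000, 1.000]
f1 = [0.997, 0.981, 0.997, 0.998, 1.000, 0.983, 1.000, 0.993, 0.996, 0.996,
      1.000, 1.000, 0.994, 0.997, 1.000, 0.999, 0.993, 1.000, 0.996, 1.000]

# On a balanced test set, accuracy == mean recall, and the macro F1
# equals the weighted F1 (every class weight is 500/10000).
accuracy = sum(recall) / len(recall)
macro_f1 = sum(f1) / len(f1)
print(round(accuracy, 3), round(macro_f1, 3))  # 0.996 0.996
```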
### Benchmarks

As a baseline to compare `xlm-roberta-base-language-detection` against, we have used the Python [langid](https://github.com/saffsd/langid.py) library. Since it comes pre-trained on 97 languages, we have used its `.set_languages()` method to constrain the language set to our 20 languages. The average accuracy of langid on the test set is **98.5%**. More details are provided in the table below.

| Language | Precision | Recall | F1-score | support |
|:--------:|:---------:|:------:|:--------:|:-------:|
|ar        |0.990      |0.970   |0.980     |500      |
|bg        |0.998      |0.964   |0.981     |500      |
|de        |0.992      |0.944   |0.967     |500      |
|el        |1.000      |0.998   |0.999     |500      |
|en        |1.000      |1.000   |1.000     |500      |
|es        |1.000      |0.968   |0.984     |500      |
|fr        |0.996      |1.000   |0.998     |500      |
|hi        |0.949      |0.976   |0.963     |500      |
|it        |0.990      |0.980   |0.985     |500      |
|ja        |0.927      |0.988   |0.956     |500      |
|nl        |0.980      |1.000   |0.990     |500      |
|pl        |0.986      |0.996   |0.991     |500      |
|pt        |0.950      |0.996   |0.973     |500      |
|ru        |0.996      |0.974   |0.985     |500      |
|sw        |1.000      |1.000   |1.000     |500      |
|th        |1.000      |0.996   |0.998     |500      |
|tr        |0.990      |0.968   |0.979     |500      |
|ur        |0.998      |0.996   |0.997     |500      |
|vi        |0.971      |0.990   |0.980     |500      |
|zh        |1.000      |1.000   |1.000     |500      |

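The same balanced-set arithmetic recovers langid's headline number from its table (accuracy is the mean per-class recall, since each language has 500 test samples):

```python
# Per-class recall for langid, copied from the baseline table above.
recall = [0.970, 0.964, 0.944, 0.998, 1.000, 0.968, 1.000, 0.976, 0.980, 0.988,
          1.000, 0.996, 0.996, 0.974, 1.000, 0.996, 0.968, 0.996, 0.990, 1.000]
accuracy = sum(recall) / len(recall)
print(round(accuracy, 3))  # 0.985, about 1.1 points below the fine-tuned model
```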
## How to get started with the model

The easiest way to use the model is via the high-level `pipeline` API:

```python
from transformers import pipeline

text = [
    "Brevity is the soul of wit.",
    "Amor, ch'a nullo amato amar perdona."
]

model_ckpt = "papluca/xlm-roberta-base-language-detection"
pipe = pipeline("text-classification", model=model_ckpt)
pipe(text, top_k=1, truncation=True)
```

Alternatively, you can load the tokenizer and model separately:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

text = [
    "Brevity is the soul of wit.",
    "Amor, ch'a nullo amato amar perdona."
]

model_ckpt = "papluca/xlm-roberta-base-language-detection"
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
model = AutoModelForSequenceClassification.from_pretrained(model_ckpt)

inputs = tokenizer(text, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

preds = torch.softmax(logits, dim=-1)

# Map raw predictions to languages
id2lang = model.config.id2label
vals, idxs = torch.max(preds, dim=1)
predictions = {id2lang[k.item()]: v.item() for k, v in zip(idxs, vals)}
print(predictions)
```

## Training procedure

Fine-tuning was done via the `Trainer` API. Here is the [Colab notebook](https://colab.research.google.com/drive/15LJTckS6gU3RQOmjLqxVNBmbsBdnUEvl?usp=sharing) with the training code.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP

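The step counts in the results table below follow directly from these settings; a quick check (using the 70k-sample training set from the dataset section and, for the learning-rate curve, assuming no warmup):

```python
import math

# 70k training samples with train_batch_size=64:
steps_per_epoch = math.ceil(70_000 / 64)
total_steps = 2 * steps_per_epoch  # num_epochs: 2
print(steps_per_epoch, total_steps)  # 1094 2188, matching the results table

# Linear schedule (assuming no warmup): LR decays from 2e-5 to 0 over training.
def lr_at(step, base_lr=2e-05, total=total_steps):
    return base_lr * (1 - step / total)

print(lr_at(0), lr_at(1094))  # 2e-05 at the start, half that after epoch 1
```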
### Training results

The validation results on the `valid` split of the Language Identification dataset are summarised below.

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2492        | 1.0   | 1094 | 0.0149          | 0.9969   | 0.9969 |
| 0.0101        | 2.0   | 2188 | 0.0103          | 0.9977   | 0.9977 |

In short, it achieves the following results on the validation set:
- Loss: 0.0101
- Accuracy: 0.9977
- F1: 0.9977

### Framework versions

- Transformers 4.12.5
- PyTorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3