julien-c committed
Commit 5d97c6b
1 Parent(s): 08ad285

Migrate model card from transformers-repo


Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/zanelim/singbert/README.md

Files changed (1)
  1. README.md +215 -0
README.md ADDED
@@ -0,0 +1,215 @@
---
language: en
tags:
- singapore
- sg
- singlish
- malaysia
- ms
- manglish
- bert-base-uncased
license: mit
datasets:
- reddit singapore, malaysia
- hardwarezone
widget:
- text: "kopi c siew [MASK]"
- text: "die [MASK] must try"
---

# SingBert

SingBert - BERT for Singlish (SG) and Manglish (MY).

## Model description

[BERT base uncased](https://github.com/google-research/bert#pre-trained-models), with its pre-training further fine-tuned on
[Singlish](https://en.wikipedia.org/wiki/Singlish) and [Manglish](https://en.wikipedia.org/wiki/Manglish) data.

## Intended uses & limitations

#### How to use

```python
>>> from transformers import pipeline
>>> nlp = pipeline('fill-mask', model='zanelim/singbert')
>>> nlp("kopi c siew [MASK]")

[{'sequence': '[CLS] kopi c siew dai [SEP]',
  'score': 0.5092713236808777,
  'token': 18765,
  'token_str': 'dai'},
 {'sequence': '[CLS] kopi c siew mai [SEP]',
  'score': 0.3515934646129608,
  'token': 14736,
  'token_str': 'mai'},
 {'sequence': '[CLS] kopi c siew bao [SEP]',
  'score': 0.05576375499367714,
  'token': 25945,
  'token_str': 'bao'},
 {'sequence': '[CLS] kopi c siew. [SEP]',
  'score': 0.006019321270287037,
  'token': 1012,
  'token_str': '.'},
 {'sequence': '[CLS] kopi c siew sai [SEP]',
  'score': 0.0038361591286957264,
  'token': 18952,
  'token_str': 'sai'}]

>>> nlp("one teh c siew dai, and one kopi [MASK].")

[{'sequence': '[CLS] one teh c siew dai, and one kopi c [SEP]',
  'score': 0.6176503300666809,
  'token': 1039,
  'token_str': 'c'},
 {'sequence': '[CLS] one teh c siew dai, and one kopi o [SEP]',
  'score': 0.21094971895217896,
  'token': 1051,
  'token_str': 'o'},
 {'sequence': '[CLS] one teh c siew dai, and one kopi. [SEP]',
  'score': 0.13027705252170563,
  'token': 1012,
  'token_str': '.'},
 {'sequence': '[CLS] one teh c siew dai, and one kopi! [SEP]',
  'score': 0.004680239595472813,
  'token': 999,
  'token_str': '!'},
 {'sequence': '[CLS] one teh c siew dai, and one kopi w [SEP]',
  'score': 0.002034128177911043,
  'token': 1059,
  'token_str': 'w'}]

>>> nlp("dont play [MASK] leh")

[{'sequence': '[CLS] dont play play leh [SEP]',
  'score': 0.9281464219093323,
  'token': 2377,
  'token_str': 'play'},
 {'sequence': '[CLS] dont play politics leh [SEP]',
  'score': 0.010990909300744534,
  'token': 4331,
  'token_str': 'politics'},
 {'sequence': '[CLS] dont play punk leh [SEP]',
  'score': 0.005583590362221003,
  'token': 7196,
  'token_str': 'punk'},
 {'sequence': '[CLS] dont play dirty leh [SEP]',
  'score': 0.0025784350000321865,
  'token': 6530,
  'token_str': 'dirty'},
 {'sequence': '[CLS] dont play cheat leh [SEP]',
  'score': 0.0025066907983273268,
  'token': 21910,
  'token_str': 'cheat'}]

>>> nlp("catch no [MASK]")

[{'sequence': '[CLS] catch no ball [SEP]',
  'score': 0.7922210693359375,
  'token': 3608,
  'token_str': 'ball'},
 {'sequence': '[CLS] catch no balls [SEP]',
  'score': 0.20503675937652588,
  'token': 7395,
  'token_str': 'balls'},
 {'sequence': '[CLS] catch no tail [SEP]',
  'score': 0.0006608376861549914,
  'token': 5725,
  'token_str': 'tail'},
 {'sequence': '[CLS] catch no talent [SEP]',
  'score': 0.0002158183924620971,
  'token': 5848,
  'token_str': 'talent'},
 {'sequence': '[CLS] catch no prisoners [SEP]',
  'score': 5.3481446229852736e-05,
  'token': 5895,
  'token_str': 'prisoners'}]

>>> nlp("confirm plus [MASK]")

[{'sequence': '[CLS] confirm plus chop [SEP]',
  'score': 0.992355227470398,
  'token': 24494,
  'token_str': 'chop'},
 {'sequence': '[CLS] confirm plus one [SEP]',
  'score': 0.0037301010452210903,
  'token': 2028,
  'token_str': 'one'},
 {'sequence': '[CLS] confirm plus minus [SEP]',
  'score': 0.0014284878270700574,
  'token': 15718,
  'token_str': 'minus'},
 {'sequence': '[CLS] confirm plus 1 [SEP]',
  'score': 0.0011354683665558696,
  'token': 1015,
  'token_str': '1'},
 {'sequence': '[CLS] confirm plus chopped [SEP]',
  'score': 0.0003804611915256828,
  'token': 24881,
  'token_str': 'chopped'}]

>>> nlp("die [MASK] must try")

[{'sequence': '[CLS] die die must try [SEP]',
  'score': 0.9552758932113647,
  'token': 3280,
  'token_str': 'die'},
 {'sequence': '[CLS] die also must try [SEP]',
  'score': 0.03644804656505585,
  'token': 2036,
  'token_str': 'also'},
 {'sequence': '[CLS] die liao must try [SEP]',
  'score': 0.003282855963334441,
  'token': 727,
  'token_str': 'liao'},
 {'sequence': '[CLS] die already must try [SEP]',
  'score': 0.0004937972989864647,
  'token': 2525,
  'token_str': 'already'},
 {'sequence': '[CLS] die hard must try [SEP]',
  'score': 0.0003659659414552152,
  'token': 2524,
  'token_str': 'hard'}]

```

Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('zanelim/singbert')
model = BertModel.from_pretrained("zanelim/singbert")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained("zanelim/singbert")
model = TFBertModel.from_pretrained("zanelim/singbert")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
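
The snippets above return per-token hidden states rather than a single sentence vector. A minimal sketch of one common pooling recipe (mean-pooling over non-padding tokens, continuing from the PyTorch variables `output` and `encoded_input` above; this recipe is an illustration, not something the original card specifies):

```python
# Mean-pool the last hidden states into one vector per sentence.
# Assumes a transformers version whose models return outputs exposing `.last_hidden_state`.
mask = encoded_input["attention_mask"].unsqueeze(-1).float()   # (batch, seq_len, 1)
summed = (output.last_hidden_state * mask).sum(dim=1)          # sum over real tokens only
sentence_embedding = summed / mask.sum(dim=1).clamp(min=1e-9)  # (batch, hidden_size)
```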

#### Limitations and bias
This model was fine-tuned on a colloquial Singlish and Manglish corpus, so it is best applied to downstream tasks involving its main
constituent languages: English, Mandarin, and Malay. Also, since the training data comes mainly from forums, be aware of the inherent biases present in such data.

## Training data
A corpus of colloquial Singlish and Manglish (both are a mixture of English, Mandarin, Tamil, Malay, and other local dialects such as Hokkien, Cantonese, or Teochew).
The corpus was collected from the subreddits `r/singapore` and `r/malaysia`, and from forums such as `hardwarezone`.
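
The card does not say how the subreddit and forum text was scraped. Purely as an illustration (the `praw` library and everything in this snippet are assumptions, not the authors' tooling), subreddit text could be collected along these lines:

```python
import praw  # assumed library; not mentioned in the original card

# Illustrative only: gather post titles, bodies, and comments from r/singapore.
reddit = praw.Reddit(client_id="YOUR_ID", client_secret="YOUR_SECRET",
                     user_agent="singbert-corpus-sketch")
texts = []
for submission in reddit.subreddit("singapore").hot(limit=100):
    texts.append(submission.title)
    if submission.selftext:
        texts.append(submission.selftext)
    submission.comments.replace_more(limit=0)   # flatten "load more comments" stubs
    texts.extend(comment.body for comment in submission.comments.list())
```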

## Training procedure

Initialized with the [BERT base uncased](https://github.com/google-research/bert#pre-trained-models) vocabulary and checkpoints (pre-trained weights).
The top 1000 custom vocabulary tokens (not overlapping with the original BERT vocabulary) were then extracted from the training data and used to fill the unused token slots in the original BERT vocabulary.
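
The vocabulary-swapping code is not part of the card. A rough sketch of the idea (file names and the `custom_tokens` list below are assumptions, not taken from the card) is to overwrite the `[unused...]` slots in the WordPiece `vocab.txt`:

```python
# Hypothetical sketch: overwrite BERT's [unusedNN] slots with custom Singlish/Manglish tokens.
# `custom_tokens` is assumed to hold the top-1000 frequent tokens absent from the original vocab.
custom_tokens = ["sia", "liao", "paiseh", "shiok"]  # placeholder; the real list had ~1000 entries

with open("vocab.txt", encoding="utf-8") as f:       # original bert-base-uncased vocab
    vocab = [line.rstrip("\n") for line in f]

existing = set(vocab)
replacements = iter(t for t in custom_tokens if t not in existing)

for i, tok in enumerate(vocab):
    if tok.startswith("[unused"):                    # e.g. [unused0], [unused1], ...
        try:
            vocab[i] = next(replacements)
        except StopIteration:
            break                                    # ran out of custom tokens

with open("vocab_custom.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(vocab) + "\n")
```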

Pre-training was then continued on this data with the following hyperparameters:
* train_batch_size: 512
* max_seq_length: 128
* num_train_steps: 300000
* num_warmup_steps: 5000
* learning_rate: 2e-5
* hardware: TPU v3-8
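
For reference, a hypothetical sketch of how these settings would map onto the flags of `run_pretraining.py` in the original BERT repo (every path, bucket name, and the TPU name below are placeholders, not values from the card):

```python
import subprocess

# Hypothetical invocation of run_pretraining.py (github.com/google-research/bert);
# all paths and the TPU name are placeholders.
subprocess.run([
    "python", "run_pretraining.py",
    "--input_file=gs://YOUR_BUCKET/pretraining_data/*.tfrecord",
    "--output_dir=gs://YOUR_BUCKET/singbert_output",
    "--bert_config_file=uncased_L-12_H-768_A-12/bert_config.json",
    "--init_checkpoint=uncased_L-12_H-768_A-12/bert_model.ckpt",
    "--do_train=True",
    "--train_batch_size=512",
    "--max_seq_length=128",
    "--num_train_steps=300000",
    "--num_warmup_steps=5000",
    "--learning_rate=2e-5",
    "--use_tpu=True",
    "--tpu_name=YOUR_TPU",  # TPU v3-8
], check=True)
```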