Commit 9706a73 ("Fix typo") by Go Inoue
Parent: fd39d7b
File changed: README.md
---
language:
- ar
license: apache-2.0
widget:
- text: "الهدف من الحياة هو [MASK] ."
---

# CAMeLBERT-DA

## Model description

We release eight models with different sizes and variants as follows:

|-|-|:-:|-:|-:|
||`bert-base-camelbert-mix`|CA,DA,MSA|167GB|17.3B|
||`bert-base-camelbert-ca`|CA|6GB|847M|
|✔|`bert-base-camelbert-da`|DA|54GB|5.8B|
||`bert-base-camelbert-msa`|MSA|107GB|12.6B|
||`bert-base-camelbert-msa-half`|MSA|53GB|6.3B|
||`bert-base-camelbert-msa-quarter`|MSA|27GB|3.1B|

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-camelbert-da')
>>> unmasker("الهدف من الحياة هو [MASK] .")  # "The goal of life is [MASK] ."
[{'sequence': '[CLS] الهدف من الحياة هو.. [SEP]',
  'score': 0.062508225440979,
  'token': 18,
  'token_str': '.'},
 {'sequence': '[CLS] الهدف من الحياة هو الموت. [SEP]',
  'score': 0.033172328025102615,
  'token': 4295,
  'token_str': 'الموت'},
 {'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]',
  'score': 0.029575437307357788,
  'token': 3696,
  'token_str': 'الحياة'},
 {'sequence': '[CLS] الهدف من الحياة هو الرحيل. [SEP]',
  'score': 0.02724040113389492,
  'token': 11449,
  'token_str': 'الرحيل'},
 {'sequence': '[CLS] الهدف من الحياة هو الحب. [SEP]',
  'score': 0.01564178802073002,
  'token': 3088,
  'token_str': 'الحب'}]
```
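The pipeline returns candidate fills sorted by descending score. As a quick illustration of post-processing this output with plain Python, here is a sketch that keeps only candidates above a probability threshold; the 0.02 cutoff is an arbitrary assumption, and the list below simply copies the example scores shown above:

```python
# Candidate fills as returned by the fill-mask pipeline (copied from the
# example output above; 'sequence' keys omitted for brevity).
predictions = [
    {'score': 0.062508225440979, 'token': 18, 'token_str': '.'},
    {'score': 0.033172328025102615, 'token': 4295, 'token_str': 'الموت'},
    {'score': 0.029575437307357788, 'token': 3696, 'token_str': 'الحياة'},
    {'score': 0.02724040113389492, 'token': 11449, 'token_str': 'الرحيل'},
    {'score': 0.01564178802073002, 'token': 3088, 'token_str': 'الحب'},
]

# Keep only fills whose probability exceeds the (assumed) threshold.
confident = [p['token_str'] for p in predictions if p['score'] > 0.02]
print(confident)  # ['.', 'الموت', 'الحياة', 'الرحيل']
```

Since the scores are already sorted, `predictions[0]['token_str']` gives the single most likely fill.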

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-camelbert-da')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-camelbert-da')
text = "مرحبا يا عالم."  # "Hello, world."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
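`output.last_hidden_state` holds one vector per token. A common way to collapse these into a single sentence vector is mean pooling weighted by the attention mask. This is a minimal sketch using dummy tensors rather than real model output (the hidden size 768 matches BERT-base; the `mean_pool` helper is illustrative, not part of the Transformers library):

```python
import torch

def mean_pool(last_hidden_state, attention_mask):
    """Average token vectors, ignoring padding positions."""
    # last_hidden_state: (batch, seq_len, hidden); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).float()     # (batch, seq_len, 1)
    summed = (last_hidden_state * mask).sum(dim=1)  # (batch, hidden)
    counts = mask.sum(dim=1).clamp(min=1e-9)        # (batch, 1)
    return summed / counts

# Dummy stand-ins for output.last_hidden_state and encoded_input['attention_mask']:
hidden = torch.ones(1, 4, 768)          # batch of 1, 4 tokens, hidden size 768
mask = torch.tensor([[1, 1, 1, 0]])     # last position is padding
sentence_vec = mean_pool(hidden, mask)
print(sentence_vec.shape)  # torch.Size([1, 768])
```

With real model output you would call `mean_pool(output.last_hidden_state, encoded_input['attention_mask'])` instead of using the dummy tensors.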

and in TensorFlow:

```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-camelbert-da')
model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-camelbert-da')
text = "مرحبا يا عالم."  # "Hello, world."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```