Commit 7569437 (parent: ba76e17): Update README.md

README.md CHANGED

---
pipeline_tag: sentence-similarity
language: fr
datasets:
- stsb_multi_mt
tags:
- Text
- Sentence Similarity
- Sentence-Embedding
- camembert-base
license: apache-2.0
model-index:
- name: CrossEncoder-camembert-large by Van Tuan DANG
  results:
  - task:
      name: Sentence-Embedding
      type: Text Similarity
    dataset:
      name: Text Similarity fr
      type: stsb_multi_mt
      args: fr
    metrics:
    - name: Test Pearson correlation coefficient
      type: Pearson_correlation_coefficient
      value: 90.34
---

## Model

Cross-encoder model for sentence similarity.

This model is an improvement over [dangvantuan/CrossEncoder-camembert-large](https://huggingface.co/dangvantuan/CrossEncoder-camembert-large), offering greater robustness and better performance.

## Training Data

This model was trained on the [STS benchmark dataset](https://huggingface.co/datasets/stsb_multi_mt/viewer/fr/train) in combination with [Augmented SBERT](https://aclanthology.org/2021.naacl-main.28.pdf). It benefits from pair-sampling strategies that use two models: [CrossEncoder-camembert-large](https://huggingface.co/dangvantuan/CrossEncoder-camembert-large) and [dangvantuan/sentence-camembert-large](https://huggingface.co/dangvantuan/sentence-camembert-large). The model predicts a score between 0 and 1 indicating the semantic similarity of two sentences.
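
The pair-sampling idea can be illustrated with a short sketch (an illustration only, not the actual training script; the corpus and `top_k` value below are made up): the bi-encoder mines candidate sentence pairs, and the cross-encoder assigns "silver" labels that augment the gold STS data.

```python
from sentence_transformers import SentenceTransformer, CrossEncoder, util

# Hypothetical illustration of Augmented SBERT pair sampling, not the original training code.
bi_encoder = SentenceTransformer("dangvantuan/sentence-camembert-large")
cross_encoder = CrossEncoder("dangvantuan/CrossEncoder-camembert-large")

sentences = [
    "Un avion est en train de décoller.",
    "Un avion décolle de la piste.",
    "Un homme joue de la guitare.",
]

# Step 1: mine candidate pairs with the bi-encoder (paraphrase mining over the corpus).
embeddings = bi_encoder.encode(sentences, convert_to_tensor=True)
candidate_pairs = util.paraphrase_mining_embeddings(embeddings, top_k=5)

# Step 2: label each candidate pair with the cross-encoder to obtain silver scores.
silver_data = []
for cosine_score, i, j in candidate_pairs:
    silver_score = cross_encoder.predict([(sentences[i], sentences[j])])[0]
    silver_data.append((sentences[i], sentences[j], float(silver_score)))

print(silver_data)
```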

## Usage (Sentence-Transformers)

Using this model is easy once you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('Lajavaness/CrossEncoder-camembert-large', max_length=512)

# Predict a similarity score in [0, 1] for each sentence pair.
scores = model.predict([('Un avion est en train de décoller.', "Un homme joue d'une grande flûte."),
                        ("Un homme étale du fromage râpé sur une pizza.", "Une personne jette un chat au plafond")])
```
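
`predict` returns one score per pair, so the same call can be used for re-ranking. A small sketch (the query and candidate sentences below are made up for illustration):

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('Lajavaness/CrossEncoder-camembert-large', max_length=512)

# Hypothetical re-ranking example: score each candidate against the query,
# then sort candidates by predicted similarity (higher = more similar).
query = "Un avion est en train de décoller."
candidates = [
    "Un avion décolle de l'aéroport.",
    "Un homme joue d'une grande flûte.",
    "Une personne jette un chat au plafond.",
]

scores = model.predict([(query, candidate) for candidate in candidates])
ranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
for candidate, score in ranked:
    print(f"{score:.3f}  {candidate}")
```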

## Evaluation

The model can be evaluated as follows on the French test data of the STS benchmark (stsb_multi_mt).

```python
from sentence_transformers.readers import InputExample
from sentence_transformers.cross_encoder.evaluation import CECorrelationEvaluator
from datasets import load_dataset

def convert_dataset(dataset):
    dataset_samples = []
    for df in dataset:
        score = float(df['similarity_score']) / 5.0  # Normalize score to range 0 ... 1
        inp_example = InputExample(texts=[df['sentence1'],
                                          df['sentence2']], label=score)
        dataset_samples.append(inp_example)
    return dataset_samples

# Loading the dataset for evaluation
df_dev = load_dataset("stsb_multi_mt", name="fr", split="dev")
df_test = load_dataset("stsb_multi_mt", name="fr", split="test")

# Convert the dataset for evaluation
# (`model` is the CrossEncoder loaded in the Usage section above.)

# For the dev set:
dev_samples = convert_dataset(df_dev)
val_evaluator = CECorrelationEvaluator.from_input_examples(dev_samples, name='sts-dev')
val_evaluator(model, output_path="./")

# For the test set, the Pearson and Spearman correlations are reported on several benchmark datasets:
test_samples = convert_dataset(df_test)
test_evaluator = CECorrelationEvaluator.from_input_examples(test_samples, name='sts-test')
test_evaluator(model, output_path="./")
```
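
As a lightweight cross-check of the evaluator output, the correlations can also be computed directly from `model.predict` (a sketch assuming `scipy` is installed and that `model` and `test_samples` from the snippets above are in scope; this is not part of the original evaluation script):

```python
from scipy.stats import pearsonr, spearmanr

# Hypothetical cross-check: compare gold labels with model predictions on the test samples.
sentence_pairs = [tuple(example.texts) for example in test_samples]
gold_scores = [example.label for example in test_samples]
pred_scores = model.predict(sentence_pairs)

pearson, _ = pearsonr(gold_scores, pred_scores)
spearman, _ = spearmanr(gold_scores, pred_scores)
print(f"Pearson: {pearson:.4f}  Spearman: {spearman:.4f}")
```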

**Test Result**:
The performance is measured using Pearson and Spearman correlation:

- On dev:

| Model | Pearson correlation | Spearman correlation | #params |
| ------------- | ------------- | ------------- | ------------- |
| [Lajavaness/CrossEncoder-camembert-large](https://huggingface.co/dangvantuan/CrossEncoder-camembert-large) | 90.34 | 90.15 | 336M |
| [dangvantuan/CrossEncoder-camembert-large](https://huggingface.co/dangvantuan/CrossEncoder-camembert-large) | 90.11 | 90.01 | 336M |

- On test:

**Pearson score**

| Model | STS-B | STS12-fr | STS13-fr | STS14-fr | STS15-fr | STS16-fr | SICK-fr |
|---------------------------------------|--------|----------|----------|----------|----------|----------|---------|
| [Lajavaness/CrossEncoder-camembert-large](https://huggingface.co/dangvantuan/CrossEncoder-camembert-large) | 0.8863 | 0.9076 | 0.8824 | 0.9022 | 0.9223 | 0.8231 | 0.8461 |
| [dangvantuan/CrossEncoder-camembert-large](https://huggingface.co/dangvantuan/CrossEncoder-camembert-large) | 0.8816 | 0.9012 | 0.8836 | 0.8986 | 0.9204 | 0.8201 | 0.8423 |

**Spearman score**

| Model | STS-B | STS12-fr | STS13-fr | STS14-fr | STS15-fr | STS16-fr | SICK-fr |
|---------------------------------------|--------|----------|----------|----------|----------|----------|---------|
| [Lajavaness/CrossEncoder-camembert-large](https://huggingface.co/dangvantuan/CrossEncoder-camembert-large) | 0.8803 | 0.8487 | 0.8788 | 0.8910 | 0.9216 | 0.8250 | 0.8078 |
| [dangvantuan/CrossEncoder-camembert-large](https://huggingface.co/dangvantuan/CrossEncoder-camembert-large) | 0.8757 | 0.8424 | 0.8801 | 0.8862 | 0.9199 | 0.8216 | 0.8038 |