This is a [sentence-transformers](https://www.SBERT.net) model: it maps Swedish sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. The model is bilingual (Swedish-English) and was trained according to the instructions in the paper [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/pdf/2004.09813.pdf) and the [documentation](https://www.sbert.net/examples/training/multilingual/README.html) accompanying its companion Python package. We used the strongest available pretrained English bi-encoder ([paraphrase-mpnet-base-v2](https://www.sbert.net/docs/pretrained_models.html#sentence-embedding-models)) as the teacher model, and the pretrained Swedish [KB-BERT](https://huggingface.co/KB/bert-base-swedish-cased) as the student model.
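As a sketch of how such dense vectors support semantic search: embed the corpus and the query, then rank corpus entries by cosine similarity. Random stand-in vectors are used below in place of real model output:

```python
import numpy as np

rng = np.random.default_rng(0)
corpus_emb = rng.normal(size=(5, 768))   # stand-ins for sentence embeddings
query_emb = rng.normal(size=(768,))      # stand-in for a query embedding

# L2-normalize so the dot product equals cosine similarity
corpus_norm = corpus_emb / np.linalg.norm(corpus_emb, axis=1, keepdims=True)
query_norm = query_emb / np.linalg.norm(query_emb)

scores = corpus_norm @ query_norm        # one cosine score per corpus sentence
best = int(np.argmax(scores))            # index of the most similar sentence
```

With real embeddings the ranking reflects semantic closeness rather than surface overlap.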
A more detailed description of the model can be found in an article we published on the [KBLab blog](https://kb-labb.github.io/posts/2021-08-23-a-swedish-sentence-transformer/).
## Usage (Sentence-Transformers)
## Evaluation Results
The model was primarily evaluated on [SweParaphrase v1.0](https://spraakbanken.gu.se/en/resources/sweparaphrase). This test set is part of [SuperLim](https://spraakbanken.gu.se/en/resources/superlim), a Swedish evaluation suite for natural language understanding tasks. We calculated the Pearson and Spearman correlations between the model's predicted similarity scores and the human similarity labels. The model achieved a Pearson correlation coefficient of **0.918** and a Spearman's rank correlation coefficient of **0.911**.
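As a quick illustration of the two metrics (with made-up numbers, not the actual SweParaphrase results): Pearson measures linear correlation, while Spearman compares rankings, so perfectly monotonic predictions give a Spearman coefficient of exactly 1.0 even when the relationship is not linear:

```python
import pandas as pd

# toy labels and predictions (hypothetical values, not real model output)
df = pd.DataFrame({
    "score": [1.0, 2.5, 3.0, 4.2, 5.0],           # human similarity labels
    "model_score": [0.10, 0.35, 0.42, 0.80, 0.95]  # predicted cosine similarities
})

pearson = df["score"].corr(df["model_score"], method="pearson")
spearman = df["score"].corr(df["model_score"], method="spearman")
print(pearson, spearman)  # Spearman is exactly 1.0 here; Pearson is slightly lower
```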
The following code snippet can be used to reproduce the above results:
```python
sentence_pair_scores = cosine_scores.diag()
df["model_score"] = sentence_pair_scores.cpu().tolist()
print(df[["score", "model_score"]].corr(method="spearman"))
print(df[["score", "model_score"]].corr(method="pearson"))
```

Examples of how to evaluate the model on the other test sets in the SuperLim suite can be found at the following links: [evaluate_faq.py](https://github.com/kb-labb/swedish-sbert/blob/main/evaluate_faq.py) (Swedish FAQ), [evaluate_swesat.py](https://github.com/kb-labb/swedish-sbert/blob/main/evaluate_swesat.py) (SweSAT synonyms), [evaluate_supersim.py](https://github.com/kb-labb/swedish-sbert/blob/main/evaluate_supersim.py) (SuperSim).
## Training
An article with more details on the training data and the model can be found on the [KBLab blog](https://kb-labb.github.io/posts/2021-08-23-a-swedish-sentence-transformer/).
Around 14.6 million sentences from English-Swedish parallel corpora were used to train the model. Data was sourced from the [Open Parallel Corpus](https://opus.nlpl.eu/) (OPUS) and downloaded via the Python package [opustools](https://pypi.org/project/opustools/). The datasets used were JW300, Europarl, EUbookshop, EMEA, TED2020, Tatoeba and OpenSubtitles.
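The teacher-student setup described in the linked paper trains the student so that its embeddings of both an English sentence and its Swedish translation approximate the teacher's embedding of the English sentence. A minimal sketch of that objective with random stand-in embeddings (the actual training uses the sentence-transformers training utilities):

```python
import numpy as np

rng = np.random.default_rng(42)
teacher_en = rng.normal(size=(8, 768))  # teacher embeddings of English sentences
student_en = rng.normal(size=(8, 768))  # student embeddings of the same English sentences
student_sv = rng.normal(size=(8, 768))  # student embeddings of the Swedish translations

# distillation objective: pull both student views toward the teacher's embedding
mse_en = np.mean((student_en - teacher_en) ** 2)
mse_sv = np.mean((student_sv - teacher_en) ** 2)
loss = mse_en + mse_sv
print(loss)
```

Minimizing this loss aligns the Swedish and English embedding spaces, which is what makes the model bilingual.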
The model was trained with the parameters:
## Citing & Authors
This model was trained by KBLab, a data lab at the National Library of Sweden.
You can cite the article on our blog: [https://kb-labb.github.io/posts/2021-08-23-a-swedish-sentence-transformer/](https://kb-labb.github.io/posts/2021-08-23-a-swedish-sentence-transformer/).
## Acknowledgements
We gratefully acknowledge the HPC RIVR consortium (www.hpc-rivr.si) and EuroHPC JU (eurohpc-ju.europa.eu) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (www.izum.si).