nekoboost committed
Commit 42b3411 (1 parent: 6ab8ee6)

Create README.md

Files changed (1):
1. README.md (+39, -0)
README.md ADDED

---
license: cc-by-4.0
language:
- cs
- en
pipeline_tag: sentence-similarity
---

## Multilingual distillation

Dist-MPNet-Paracrawl is a BERT-small model [distilled](https://arxiv.org/abs/2004.09813) from the [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) model, trained on the parallel Czech-English (cs-en) dataset [CzEng](https://ufal.mff.cuni.cz/czeng).
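
To illustrate the idea behind the linked distillation recipe, the sketch below trains a small student so that its embeddings of both an English sentence and its Czech translation match the teacher's embedding of the English sentence. The student checkpoint, pooling, batch structure, and hyperparameters here are placeholders for illustration, not the exact setup used to train Dist-MPNet-Paracrawl.

```python
import torch
from transformers import AutoModel, AutoTokenizer

teacher_name = "sentence-transformers/all-mpnet-base-v2"
# Placeholder student with a matching 768-dim hidden size; the real student is a
# BERT-small model (a projection layer would be needed if the dimensions differ).
student_name = "bert-base-multilingual-cased"

teacher = AutoModel.from_pretrained(teacher_name).eval()
student = AutoModel.from_pretrained(student_name)
teacher_tok = AutoTokenizer.from_pretrained(teacher_name)
student_tok = AutoTokenizer.from_pretrained(student_name)
optimizer = torch.optim.AdamW(student.parameters(), lr=2e-5)

def embed(model, tokenizer, texts):
    batch = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
    hidden = model(**batch).last_hidden_state            # (batch, seq_len, hidden)
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean pooling over tokens

# A toy parallel cs-en pair standing in for the training corpus.
en_batch = ["The weather is great for a walk in the park today."]
cs_batch = ["Dnes je výborné počasí na procházku po parku."]

with torch.no_grad():
    target = embed(teacher, teacher_tok, en_batch)        # teacher sees English only

# The student must reproduce the teacher embedding for both languages.
loss = (torch.nn.functional.mse_loss(embed(student, student_tok, en_batch), target)
        + torch.nn.functional.mse_loss(embed(student, student_tok, cs_batch), target))
loss.backward()
optimizer.step()
```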

This model was created at Seznam.cz as part of a project to build high-quality small Czech semantic embedding models. These models perform well across various natural language processing tasks, including similarity search, retrieval, clustering, and classification. For further details and evaluation results, please see the associated [paper]() or the [GitHub repository](https://github.com/seznam/czech-semantic-embedding-models).

## How to Use

You can load and use the model like this:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "Seznam/dist-mpnet-paracrawl-cs-en"  # Hugging Face model id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

input_texts = [
    "Dnes je výborné počasí na procházku po parku.",  # "The weather is great for a walk in the park today."
    "Večer si oblíbím dobrý film a uvařím si čaj."    # "Tonight I'll enjoy a good film and make myself some tea."
]

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')

outputs = model(**batch_dict)
embeddings = outputs.last_hidden_state[:, 0]  # extract the CLS token embedding of each sentence

# Cosine similarity between the two sentence embeddings
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
```
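
Building on the same CLS embeddings, the sketch below ranks a small corpus against a query by cosine similarity. The corpus and query sentences are invented for illustration, and the repo id is the same assumption as in the snippet above.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "Seznam/dist-mpnet-paracrawl-cs-en"  # assumed to be this model's repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

def cls_embeddings(texts):
    batch = tokenizer(texts, max_length=512, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return model(**batch).last_hidden_state[:, 0]   # CLS token embeddings

corpus = [
    "Recept na domácí chléb bez kvasnic.",            # "A recipe for homemade bread without yeast."
    "Předpověď počasí na víkend slibuje slunečno.",   # "The weekend weather forecast promises sunshine."
    "Jak vybrat správné běžecké boty.",               # "How to choose the right running shoes."
]
query = "Bude o víkendu hezky?"                       # "Will the weather be nice at the weekend?"

corpus_emb = cls_embeddings(corpus)
query_emb = cls_embeddings([query])

# Rank corpus sentences by cosine similarity to the query (broadcast over the corpus).
scores = torch.nn.functional.cosine_similarity(query_emb, corpus_emb)
best = corpus[int(scores.argmax())]
print(best, scores.tolist())
```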