---
language:
- bn
- gu
- hi
- kn
- ml
- mr
- ta
- te
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:112855
- loss:MSELoss
- indic
base_model: aloobun/d-mxbai-L8-embed
widget:
- source_sentence: (Laughter) And I've already hinted what that something is.
sentences:
- >-
हे मजेशीर आहे, मी ट्विटर आणि फेसबुकवर विचारले असे की, "तुम्ही अगतिकतेची
व्याख्या कशी कराल? तुम्हाला कशामुळे अगतिक वाटते?"
- (हशा) आणि मी आधीच थोडीशी कल्पना दिली आहे ते काय करावे लागेल त्याबद्दल.
- >-
तर मी जेव्हा ह्या दालनात नजर फिरवितो माणसांवर, ज्यांनी मिळवलंय, किंवा
मिळवायच्या मार्गावर आहेत, लक्षणीय यश, मी त्यांना हे लक्षात ठेवायला सांगतो:
वाट पाहू नका.
- source_sentence: I no longer try to be right; I choose to be happy.
sentences:
- এটি একটি অসাধারণ ঘনটা এবং এক অদ্ভুত অনুধাবন।
- কেন এই ধারণাটা ছড়িয়ে গেল?
- আমি সুখে থাকাকেই বেছে নিয়েছি।
- source_sentence: >-
And if tempers are still too high, then they send someone off to visit some
relatives, as a cooling-off period.
sentences:
- >-
और यदि तब भी गुस्सा शांत न हो, तो वो किसी को अपने रिश्तेदारों से मिलने भेज
देते हैं शांत होने के लिये।
- >-
और वे तुम्हे गलत समय पर बाधित करते रहते है जब तुम अच मैं कुच करने कि कोशिश
कर रहे होते हो जिसके लिये वे तुम्हे भुगतान करते है वे तुमको बधित करते हैं।
- >-
इस प्रयोग का आखिरी सवाल था: कैसे आप अपने जीवन से दूसरों पर सकारात्मक प्रभाव
डालेंगे?
- source_sentence: I see, I see one way in the back.
sentences:
- ಸ್ಟಾಂಡರ್ಡ್ ಚಾರ್ಟರ್ಡ್ 140 ಮಿಲಿಯನ್ ತಂದಿದೆ.
- ನಗರಗಳಲ್ಲಂತೂ ಶೇಕಡಾ ೮೦ರಷ್ಟು ಮಕ್ಕಳು ಕಾಲೇಜಿಗೆ ಹೋಗುತ್ತಾರೆ.
- ಇನ್ನು ಯಾರಾದರೂ? ನನಗೆ ಕಾಣಿಸುತ್ತಿದೆ, ಅಲ್ಲಿ..ಹಿಂದೆ.. ಒಂದು ಕೈ ಕಾಣಿಸುತ್ತಿದೆ.
- source_sentence: Whenever it rains, magically, mushrooms appear overnight.
sentences:
- ಈ ವಿಷಯವನ್ನು ಅವರು ಮುಚ್ಚಿಟ್ಟರು, ಆದರೆ ಇತರರಿಗೆ ಬೇಗನೇ ತಿಳಿಯಿತು.
- >-
ಮಳೆಯಾದಾಗೆಲ್ಲ, ಮನಮೋಹಕವಾಗಿ, ಅಣಬೆಗಳು ಒಂದು ರಾತ್ರಿಯ ವೇಳೆಯಲ್ಲಿ
ಕಾಣಿಸಿಕೊಳ್ಳುತ್ತವೆ.
- 'ಪ್ರೇಕ್ಷಕ: 1947 ಎಬಿ: 1947, ಯಾವ ತಿಂಗಳು?'
datasets:
- aloobun/indic-parallel-sentences-talks
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- negative_mse
- src2trg_accuracy
- trg2src_accuracy
- mean_accuracy
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on aloobun/d-mxbai-L8-embed
results:
- task:
type: knowledge-distillation
name: Knowledge Distillation
dataset:
name: en mr
type: en-mr
metrics:
- type: negative_mse
value: -14.405468106269836
name: Negative Mse
- task:
type: translation
name: Translation
dataset:
name: en mr
type: en-mr
metrics:
- type: src2trg_accuracy
value: 0.324
name: Src2Trg Accuracy
- type: trg2src_accuracy
value: 0.174
name: Trg2Src Accuracy
- type: mean_accuracy
value: 0.249
name: Mean Accuracy
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts17 en mr test
type: sts17-en-mr-test
metrics:
- type: pearson_cosine
value: 0.21811289256702704
name: Pearson Cosine
- type: spearman_cosine
value: 0.22533360893418355
name: Spearman Cosine
- task:
type: knowledge-distillation
name: Knowledge Distillation
dataset:
name: en hi
type: en-hi
metrics:
- type: negative_mse
value: -14.047445356845856
name: Negative Mse
- task:
type: translation
name: Translation
dataset:
name: en hi
type: en-hi
metrics:
- type: src2trg_accuracy
value: 0.465
name: Src2Trg Accuracy
- type: trg2src_accuracy
value: 0.244
name: Trg2Src Accuracy
- type: mean_accuracy
value: 0.35450000000000004
name: Mean Accuracy
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts17 en hi test
type: sts17-en-hi-test
metrics:
- type: pearson_cosine
value: 0.08483694965794362
name: Pearson Cosine
- type: spearman_cosine
value: 0.13404452326754046
name: Spearman Cosine
- task:
type: knowledge-distillation
name: Knowledge Distillation
dataset:
name: en bn
type: en-bn
metrics:
- type: negative_mse
value: -15.71638137102127
name: Negative Mse
- task:
type: translation
name: Translation
dataset:
name: en bn
type: en-bn
metrics:
- type: src2trg_accuracy
value: 0.242
name: Src2Trg Accuracy
- type: trg2src_accuracy
value: 0.081
name: Trg2Src Accuracy
- type: mean_accuracy
value: 0.1615
name: Mean Accuracy
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts17 en bn test
type: sts17-en-bn-test
metrics:
- type: pearson_cosine
value: 0.14785129719314127
name: Pearson Cosine
- type: spearman_cosine
value: 0.1830075106480045
name: Spearman Cosine
- task:
type: knowledge-distillation
name: Knowledge Distillation
dataset:
name: en gu
type: en-gu
metrics:
- type: negative_mse
value: -16.396714746952057
name: Negative Mse
- task:
type: translation
name: Translation
dataset:
name: en gu
type: en-gu
metrics:
- type: src2trg_accuracy
value: 0.04
name: Src2Trg Accuracy
- type: trg2src_accuracy
value: 0.017
name: Trg2Src Accuracy
- type: mean_accuracy
value: 0.0285
name: Mean Accuracy
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts17 en gu test
type: sts17-en-gu-test
metrics:
- type: pearson_cosine
value: 0.08746107622701571
name: Pearson Cosine
- type: spearman_cosine
value: 0.11731440991672663
name: Spearman Cosine
- task:
type: knowledge-distillation
name: Knowledge Distillation
dataset:
name: en ta
type: en-ta
metrics:
- type: negative_mse
value: -16.221003234386444
name: Negative Mse
- task:
type: translation
name: Translation
dataset:
name: en ta
type: en-ta
metrics:
- type: src2trg_accuracy
value: 0.102
name: Src2Trg Accuracy
- type: trg2src_accuracy
value: 0.04
name: Trg2Src Accuracy
- type: mean_accuracy
value: 0.071
name: Mean Accuracy
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts17 en ta test
type: sts17-en-ta-test
metrics:
- type: pearson_cosine
value: -0.02863897450386144
name: Pearson Cosine
- type: spearman_cosine
value: -0.039475796340022885
name: Spearman Cosine
- task:
type: knowledge-distillation
name: Knowledge Distillation
dataset:
name: en kn
type: en-kn
metrics:
- type: negative_mse
value: -16.703946888446808
name: Negative Mse
- task:
type: translation
name: Translation
dataset:
name: en kn
type: en-kn
metrics:
- type: src2trg_accuracy
value: 0.117
name: Src2Trg Accuracy
- type: trg2src_accuracy
value: 0.068
name: Trg2Src Accuracy
- type: mean_accuracy
value: 0.0925
name: Mean Accuracy
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts17 en kn test
type: sts17-en-kn-test
metrics:
- type: pearson_cosine
value: 0.04635550247380243
name: Pearson Cosine
- type: spearman_cosine
value: 0.020029816999255046
name: Spearman Cosine
- task:
type: knowledge-distillation
name: Knowledge Distillation
dataset:
name: en te
type: en-te
metrics:
- type: negative_mse
value: -17.04743355512619
name: Negative Mse
- task:
type: translation
name: Translation
dataset:
name: en te
type: en-te
metrics:
- type: src2trg_accuracy
value: 0.075
name: Src2Trg Accuracy
- type: trg2src_accuracy
value: 0.025
name: Trg2Src Accuracy
- type: mean_accuracy
value: 0.05
name: Mean Accuracy
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts17 en te test
type: sts17-en-te-test
metrics:
- type: pearson_cosine
value: 0.12394140653755585
name: Pearson Cosine
- type: spearman_cosine
value: 0.19417699598729235
name: Spearman Cosine
- task:
type: knowledge-distillation
name: Knowledge Distillation
dataset:
name: en ml
type: en-ml
metrics:
- type: negative_mse
value: -17.274518311023712
name: Negative Mse
- task:
type: translation
name: Translation
dataset:
name: en ml
type: en-ml
metrics:
- type: src2trg_accuracy
value: 0.054
name: Src2Trg Accuracy
- type: trg2src_accuracy
value: 0.024
name: Trg2Src Accuracy
- type: mean_accuracy
value: 0.039
name: Mean Accuracy
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts17 en ml test
type: sts17-en-ml-test
metrics:
- type: pearson_cosine
value: 0.24086569602868083
name: Pearson Cosine
- type: spearman_cosine
value: 0.2717089217002832
name: Spearman Cosine
license: apache-2.0
---
# SentenceTransformer based on aloobun/d-mxbai-L8-embed
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [aloobun/d-mxbai-L8-embed](https://huggingface.co/aloobun/d-mxbai-L8-embed) on the en-mr, en-hi, en-bn, en-gu, en-ta, en-kn, en-te, and en-ml subsets of [aloobun/indic-parallel-sentences-talks](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks), extending a monolingual English model to several Indic languages. It maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
_This model is a work in progress._
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [aloobun/d-mxbai-L8-embed](https://huggingface.co/aloobun/d-mxbai-L8-embed)
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- [en-mr](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks)
- [en-hi](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks)
- [en-bn](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks)
- [en-gu](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks)
- [en-ta](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks)
- [en-kn](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks)
- [en-te](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks)
- [en-ml](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks)
- **Languages:** bn, gu, hi, kn, ml, mr, ta, te
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub (replace the placeholder with this model's repo id)
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Whenever it rains, magically, mushrooms appear overnight.',
'ಮಳೆಯಾದಾಗೆಲ್ಲ, ಮನಮೋಹಕವಾಗಿ, ಅಣಬೆಗಳು ಒಂದು ರಾತ್ರಿಯ ವೇಳೆಯಲ್ಲಿ ಕಾಣಿಸಿಕೊಳ್ಳುತ್ತವೆ.',
'ಈ ವಿಷಯವನ್ನು ಅವರು ಮುಚ್ಚಿಟ್ಟರು, ಆದರೆ ಇತರರಿಗೆ ಬೇಗನೇ ತಿಳಿಯಿತು.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
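Because the student maps English and the Indic languages into a shared embedding space, the model can also be used for cross-lingual search. Below is a minimal sketch using `sentence_transformers.util.semantic_search`; the corpus is two sample sentences taken from this card, and the model id is the same placeholder as above.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder, as above

# Tiny illustrative corpus (Hindi and Marathi sentences from this card).
corpus = [
    "मेरे पति ने एक साल पहले मुझको छोड़ दिया।",  # "My husband left me a year ago."
    "मी आज तुम्हाला एक कथा सांगणार आहे.",  # "Now I'm going to give you a story."
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# English query; the nearest corpus sentence should be its translation.
query_embedding = model.encode("I am going to tell you a story.", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)[0]
print(corpus[hits[0]["corpus_id"]], hits[0]["score"])
```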
## Evaluation
### Metrics
#### Knowledge Distillation
* Datasets: `en-mr`, `en-hi`, `en-bn`, `en-gu`, `en-ta`, `en-kn`, `en-te` and `en-ml`
* Evaluated with [MSEEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.MSEEvaluator)
| Metric | en-mr | en-hi | en-bn | en-gu | en-ta | en-kn | en-te | en-ml |
|:-----------------|:-------------|:-------------|:-------------|:-------------|:------------|:-------------|:-------------|:-------------|
| **negative_mse** | **-14.4055** | **-14.0474** | **-15.7164** | **-16.3967** | **-16.221** | **-16.7039** | **-17.0474** | **-17.2745** |
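For context, here is a minimal sketch of reproducing such a score with `MSEEvaluator`, which encodes the English side with the teacher and the Indic side with the student and reports the negative MSE. The teacher id is a placeholder, since this card does not name the teacher that produced the `label` embeddings; the sentence pair is taken from the evaluation samples below.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import MSEEvaluator

student = SentenceTransformer("sentence_transformers_model_id")  # placeholder, as in Usage
teacher = SentenceTransformer("teacher_model_id")  # distillation teacher (not stated in this card)

english = ["Now I'm going to give you a story."]
marathi = ["मी आज तुम्हाला एक कथा सांगणार आहे."]

# The teacher encodes the English source, the student encodes the Marathi target;
# the evaluator reports the negative MSE between the two sets of embeddings.
mse_evaluator = MSEEvaluator(english, marathi, teacher_model=teacher, name="en-mr")
print(mse_evaluator(student))
```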
#### Translation
* Datasets: `en-mr`, `en-hi`, `en-bn`, `en-gu`, `en-ta`, `en-kn`, `en-te` and `en-ml`
* Evaluated with [TranslationEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TranslationEvaluator)
| Metric | en-mr | en-hi | en-bn | en-gu | en-ta | en-kn | en-te | en-ml |
|:------------------|:----------|:-----------|:-----------|:-----------|:----------|:-----------|:---------|:----------|
| src2trg_accuracy | 0.324 | 0.465 | 0.242 | 0.04 | 0.102 | 0.117 | 0.075 | 0.054 |
| trg2src_accuracy | 0.174 | 0.244 | 0.081 | 0.017 | 0.04 | 0.068 | 0.025 | 0.024 |
| **mean_accuracy** | **0.249** | **0.3545** | **0.1615** | **0.0285** | **0.071** | **0.0925** | **0.05** | **0.039** |
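`TranslationEvaluator` embeds both sides of each pair and counts how often a sentence's own translation is its nearest neighbour in the other language (src2trg, trg2src, and their mean). A minimal sketch with two pairs taken from the training samples below:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TranslationEvaluator

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder, as in Usage

english = ["Does anybody know?", "The idea is actually pretty simple."]
telugu = ["ఎవరికైనా తెలుసా?", "ఈ ఆలోచన చాలా సులభమైనది."]

# src2trg_accuracy: fraction of English sentences whose nearest Telugu
# neighbour is their own translation (trg2src covers the reverse direction).
translation_evaluator = TranslationEvaluator(english, telugu, name="en-te")
print(translation_evaluator(model))
```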
#### Semantic Similarity
* Datasets: `sts17-en-mr-test`, `sts17-en-hi-test`, `sts17-en-bn-test`, `sts17-en-gu-test`, `sts17-en-ta-test`, `sts17-en-kn-test`, `sts17-en-te-test` and `sts17-en-ml-test`
* Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | sts17-en-mr-test | sts17-en-hi-test | sts17-en-bn-test | sts17-en-gu-test | sts17-en-ta-test | sts17-en-kn-test | sts17-en-te-test | sts17-en-ml-test |
|:--------------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|
| pearson_cosine | 0.2181 | 0.0848 | 0.1479 | 0.0875 | -0.0286 | 0.0464 | 0.1239 | 0.2409 |
| **spearman_cosine** | **0.2253** | **0.134** | **0.183** | **0.1173** | **-0.0395** | **0.02** | **0.1942** | **0.2717** |
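`EmbeddingSimilarityEvaluator` correlates the cosine similarity of each sentence pair with a gold similarity label (Pearson and Spearman). A minimal sketch; the gold scores here are hypothetical stand-ins for the STS17 annotations:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder, as in Usage

sentences1 = ["Thank you so much, Chris.", "My husband left me a year ago."]
sentences2 = ["बहुत बहुत धन्यवाद,क्रिस.", "मेरे दो बच्चे हैं जो पाँच साल के भी नहीं हैं"]
gold_scores = [5.0, 1.0]  # hypothetical 0-5 similarity labels

sts_evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores, name="sts17-en-hi-test")
print(sts_evaluator(model))
```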
## Training Details
### Training Datasets
#### en-mr
* Dataset: [en-mr](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks) at [604450b](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks/tree/604450baf780fd49257a8541c331e7bb5a90171d)
* Size: 21,756 training samples
* Columns: `english`, `non_english`, and `label`
* Approximate statistics based on the first 1000 samples:
  |      | english | non_english | label |
  |:-----|:--------|:------------|:------|
  | type | string  | string      | list  |
* Samples:
  | english | non_english | label |
  |:--------|:------------|:------|
  | (Laughter) But in any case, that was more than 100 years ago. | (हशा) पण काही झालेतरी ते होते १०० वर्षांपूर्वीचे. | [-0.07917306572198868, 0.40863776206970215, 0.39547035098075867, 0.5217214822769165, -0.49311134219169617, ...] |
  | You'd think we might have grown up since then. | तेव्हापासून आपण थोडे सुधारलो आहोत असे आपल्याला वाटते. | [0.4867176115512848, -0.18171744048595428, 0.2339124083518982, 0.6620380878448486, 0.38678815960884094, ...] |
  | Now, a friend, an intelligent lapsed Jew, who, incidentally, observes the Sabbath for reasons of cultural solidarity, describes himself as a "tooth-fairy agnostic." | आता एक मित्र, एक बुद्धिमान माजी-ज्यू, जो आपल्या संस्कृतीशी एकजूट दाखवण्यासाठी सबाथ पाळतो, स्वतःला दंतपरी अज्ञेय समजतो, | [0.5010754466056824, -0.5600723028182983, 0.10560179501771927, -0.12681618332862854, -0.47324138879776, ...] |
* Loss: [MSELoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)
#### en-hi
* Dataset: [en-hi](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks) at [604450b](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks/tree/604450baf780fd49257a8541c331e7bb5a90171d)
* Size: 46,116 training samples
* Columns: `english`, `non_english`, and `label`
* Approximate statistics based on the first 1000 samples:
  |      | english | non_english | label |
  |:-----|:--------|:------------|:------|
  | type | string  | string      | list  |
* Samples:
  | english | non_english | label |
  |:--------|:------------|:------|
  | I've been living with HIV for the past four years. | मैं पिछले चार साल से एच आइ वी के साथ रह रही हूँ | [-0.004218218382447958, -0.9862065315246582, -1.1370266675949097, 1.2322533130645752, 0.4485853314399719, ...] |
  | My husband left me a year ago. | मेरे पति ने एक साल पहले मुझको छोड़ दिया। | [0.5797509551048279, -0.816991925239563, -0.28531885147094727, 0.5789890885353088, -0.9830609560012817, ...] |
  | I have two kids under the age of five. | मेरे दो बच्चे हैं जो पाँच साल के भी नहीं हैं | [-0.45990556478500366, 0.5632603168487549, -0.11529318988323212, 0.23170329630374908, -0.177066370844841, ...] |
* Loss: [MSELoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)
#### en-bn
* Dataset: [en-bn](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks) at [604450b](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks/tree/604450baf780fd49257a8541c331e7bb5a90171d)
* Size: 9,401 training samples
* Columns: `english`, `non_english`, and `label`
* Approximate statistics based on the first 1000 samples:
  |      | english | non_english | label |
  |:-----|:--------|:------------|:------|
  | type | string  | string      | list  |
* Samples:
  | english | non_english | label |
  |:--------|:------------|:------|
  | They're just practicing. | তারা শুধুই অনুশীলন করছে। | [0.03945370391011238, 0.9245128631591797, -0.12790781259536743, 0.5141751766204834, -0.6310628056526184, ...] |
  | One day they'll get here. | একদিন হয়তো তারা এখানে আসতে পারবে। | [-0.1937061846256256, 0.3374898135662079, -0.1676691621541977, 0.44971567392349243, 0.45998144149780273, ...] |
  | Now when I got out, I was diagnosed and I was given medications by a psychiatrist. | তো, আমি যখন সেখান থেকে বের হলাম, তখন আমার রোগ নির্নয় করা হলো আর আমাকে ঔষুধপত্র দিলেন মনোরোগ চিকিৎসক | [0.35454168915748596, -0.8726581335067749, -0.3993096947669983, 0.7934805750846863, -0.9255509376525879, ...] |
* Loss: [MSELoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)
#### en-gu
* Dataset: [en-gu](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks) at [604450b](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks/tree/604450baf780fd49257a8541c331e7bb5a90171d)
* Size: 14,805 training samples
* Columns: `english`, `non_english`, and `label`
* Approximate statistics based on the first 1000 samples:
  |      | english | non_english | label |
  |:-----|:--------|:------------|:------|
  | type | string  | string      | list  |
* Samples:
  | english | non_english | label |
  |:--------|:------------|:------|
  | It's doing that based on the content inside the images. | તે છબીઓની અંદર સામગ્રી પર આધારિત છે. | [-0.10993346571922302, -0.16450753808021545, 0.46822917461395264, -0.2844494879245758, 0.869172990322113, ...] |
  | And that gets really exciting when you think about the richness of the semantic information a lot of images have. | અને જ્યારે તમે સમૃદ્ધિ વિશે વિચારો છો ત્યારે તે ખરેખર આકર્ષક બને છે સિમેન્ટીક માહિતીની ઘણી બધી છબીઓ છે. | [0.09240571409463882, -0.15316684544086456, 0.3019101619720459, -0.13211244344711304, 0.494329571723938, ...] |
  | Like when you do a web search for images, you type in phrases, and the text on the web page is carrying a lot of information about what that picture is of. | જેમ તમે છબીઓ માટે વેબ શોધ કરો છો ત્યારે, તમે શબ્દસમૂહો લખો છો, અને વેબ પૃષ્ઠ પરનો ટેક્સ્ટ ઘણી બધી માહિતી લઈ રહી છે તે ચિત્ર શું છે તે વિશે | [-0.17813900113105774, -0.5480513572692871, 0.2136719971895218, 0.1629626601934433, 0.7170971632003784, ...] |
* Loss: [MSELoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)
#### en-ta
* Dataset: [en-ta](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks) at [604450b](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks/tree/604450baf780fd49257a8541c331e7bb5a90171d)
* Size: 10,196 training samples
* Columns: `english`, `non_english`, and `label`
* Approximate statistics based on the first 1000 samples:
  |      | english | non_english | label |
  |:-----|:--------|:------------|:------|
  | type | string  | string      | list  |
* Samples:
  | english | non_english | label |
  |:--------|:------------|:------|
  | Or perhaps an ordinary person like you or me? | அல்லது சாதாரண மனிதனாக வாழ்ந்த நம்மைப் போன்றவரா? | [0.03689160570502281, -0.021389128640294075, -0.6246430277824402, -0.20952607691287994, 0.054864056408405304, ...] |
  | We don't know. | அது நமக்கு தெரியாது. | [0.15699629485607147, -0.3969012498855591, -1.0549111366271973, -0.5266945958137512, -0.07592934370040894, ...] |
  | But the Indus people also left behind artifacts with writing on them. | ஆனால் சிந்து சமவெளி மக்கள் எழுத்துகள் நிறைந்த கலைப்பொருட்களை நமக்கு விட்டுச் சென்றிருக்கின்றனர். | [-0.5243279337882996, 0.48444223403930664, -0.06693703681230545, -0.01581714116036892, -0.21955616772174835, ...] |
* Loss: [MSELoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)
#### en-kn
* Dataset: [en-kn](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks) at [604450b](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks/tree/604450baf780fd49257a8541c331e7bb5a90171d)
* Size: 1,266 training samples
* Columns: `english`, `non_english`, and `label`
* Approximate statistics based on the first 1000 samples:
  |      | english | non_english | label |
  |:-----|:--------|:------------|:------|
  | type | string  | string      | list  |
* Samples:
  | english | non_english | label |
  |:--------|:------------|:------|
  | Now, there is other origami in space. | ಜಪಾನಿನ ಏರೋಸ್ಪೇಸ್ ಏಜೆನ್ಸಿಯು ಕಳುಹಿಸಿರುವ ಸೌರಪಟದ | [-0.08880611509084702, 0.09982031583786011, 0.02458847127854824, 0.476515531539917, -0.021379221230745316, ...] |
  | Japan Aerospace [Exploration] Agency flew a solar sail, and you can see here that the sail expands out, and you can still see the fold lines. | ಹಾಯಿಯು ಬಿಚ್ಚಿಕೊಳ್ಳುವುದನ್ನು ನೀವಿಲ್ಲಿ ನೋಡಬಹುದು. ಜೊತೆಗೆ ಮಡಿಕೆಯ ಗೆರೆಗಳನ್ನು ಇನ್ನೂ ನೋಡಬಹುದು. ಇಲ್ಲಿ ಬಗೆಹರಿಸಲಾದ ಸಮಸ್ಯೆ ಏನೆಂದರೆ, ಗುರಿ | [-0.34035903215408325, 0.07759397476911545, 0.1922168731689453, -0.2632356286048889, 0.5736825466156006, ...] |
  | The problem that's being solved here is something that needs to be big and sheet-like at its destination, but needs to be small for the journey. | ತಲುಪಿದಾಗ ಹಾಳೆಯಂತೆ ಹರಡಿಕೊಳ್ಳುವ, ಆದರೆ ಪ್ರಯಾಣದ ಸಮಯದಲ್ಲಿ ಪುಟ್ಟದಾಗಿ ಇರಬೇಕು ಎಂಬ ಸಮಸ್ಯೆ. ಇದು ಬಾಹ್ಯಾಕಾಶಕ್ಕೆ ಹೋಗಬೇಕಾದರಾಗಲೀ ಅಥವಾ | [0.07517104595899582, -0.14021596312522888, 0.6983174681663513, 0.4898601472377777, -0.5877286195755005, ...] |
* Loss: [MSELoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)
#### en-te
* Dataset: [en-te](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks) at [604450b](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks/tree/604450baf780fd49257a8541c331e7bb5a90171d)
* Size: 4,284 training samples
* Columns: `english`, `non_english`, and `label`
* Approximate statistics based on the first 1000 samples:
  |      | english | non_english | label |
  |:-----|:--------|:------------|:------|
  | type | string  | string      | list  |
* Samples:
  | english | non_english | label |
  |:--------|:------------|:------|
  | Friends, maybe one of you can tell me, what was I doing before becoming a children's rights activist? | మిత్రులారా మీలో ఎవరోఒకరు నాతో చెప్పొచ్చు బాలల హక్కులకోసం పోరాడ్డానికి ముందు నేనేం చేసేవాడినో | [-0.40020492672920227, -0.2989244759082794, -0.6533952951431274, 0.23902057111263275, 0.08480175584554672, ...] |
  | Does anybody know? | ఎవరికైనా తెలుసా? | [0.2367328256368637, -0.04550345987081528, -1.176395297050476, -0.44055190682411194, 0.13103251159191132, ...] |
  | No. | తెలీదు | [-0.06585437804460526, -0.36286693811416626, 0.11095129698514938, -0.14597812294960022, -0.03260830044746399, ...] |
* Loss: [MSELoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)
#### en-ml
* Dataset: [en-ml](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks) at [604450b](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks/tree/604450baf780fd49257a8541c331e7bb5a90171d)
* Size: 5,031 training samples
* Columns: `english`, `non_english`, and `label`
* Approximate statistics based on the first 1000 samples:
  |      | english | non_english | label |
  |:-----|:--------|:------------|:------|
  | type | string  | string      | list  |
* Samples:
  | english | non_english | label |
  |:--------|:------------|:------|
  | (Applause) Trevor Neilson: And also, Tan's mother is here today, in the fourth or fifth row. | (കൈയ്യടി ) ട്രെവോര് നെല്സണ്: കൂടാതെ താനിന്റെ അമ്മയും ഇന്ന് ഇവിടെ ഉണ്ട് നാലാമത്തെയോ അഞ്ചാമത്തെയോ വരിയില് | [0.4477437138557434, -0.10711782425642014, 0.19890448451042175, 0.2685866355895996, 0.12080372869968414, ...] |
  | (Applause) | (കൈയ്യടി ) | [0.07853835821151733, 0.18781603872776031, -0.09047681838274002, 0.25601497292518616, -0.5206068754196167, ...] |
  | So a couple of years ago I started a program to try to get the rockstar tech and design people to take a year off and work in the one environment that represents pretty much everything they're supposed to hate; we have them work in government. | രണ്ടു കൊല്ലങ്ങൾക്കു മുൻപ് ഞാൻ ഒരു സംരഭത്തിനു തുടക്കമിട്ടു ടെക്നിക്കൽ ഡിസൈൻ മേഖലകളിലെ വലിയ താരങ്ങളെ അവരുടെ ഒരു വർഷത്തെ ജോലികളിൽ നിന്നൊക്കെ അടർത്തിയെടുത്ത് മറ്റൊരു മേഖലയിൽ ജോലി ചെയ്യാൻ ക്ഷണിക്കാൻ അതും അവർ ഏറ്റവും കൂടുതൽ വെറുത്തേക്കാവുന്ന ഒരു മേഖലയിൽ: ഞങ്ങൾ അവരെ ഗവൺ മെന്റിനു വേണ്ടി പണിയെടുപ്പിക്കുന്നു. | [0.10994623601436615, -0.09076910465955734, -0.3843494653701782, 0.33856505155563354, 0.3447953462600708, ...] |
* Loss: [MSELoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)
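Each subset is trained the same way: the student learns to map both the English sentence and its Indic translation onto the teacher embedding stored in `label`. A minimal sketch of this setup, assuming the subsets are exposed as dataset configurations named `en-mr`, `en-hi`, and so on:
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MSELoss

student = SentenceTransformer("aloobun/d-mxbai-L8-embed")

# One subset shown; the full run trains on all eight en-xx subsets.
train_dataset = load_dataset("aloobun/indic-parallel-sentences-talks", "en-mr", split="train")

# MSELoss regresses the student's embeddings of every text column
# ("english" and "non_english") onto the teacher vector in "label".
loss = MSELoss(student)

trainer = SentenceTransformerTrainer(model=student, train_dataset=train_dataset, loss=loss)
trainer.train()
```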
### Evaluation Datasets
#### en-mr
* Dataset: [en-mr](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks) at [604450b](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks/tree/604450baf780fd49257a8541c331e7bb5a90171d)
* Size: 1,000 evaluation samples
* Columns: `english`, `non_english`, and `label`
* Approximate statistics based on the first 1000 samples:
  |      | english | non_english | label |
  |:-----|:--------|:------------|:------|
  | type | string  | string      | list  |
* Samples:
  | english | non_english | label |
  |:--------|:------------|:------|
  | Now I'm going to give you a story. | मी आज तुम्हाला एक कथा सांगणार आहे. | [0.19280874729156494, -0.07861180603504181, -0.40782108902931213, 0.3979630172252655, 0.08477412909269333, ...] |
  | It's an Indian story about an Indian woman and her journey. | एक भारतीय महिला आणि तिच्या वाटचालीची हि एक भारतीय कहाणी आहे. | [-0.5461456179618835, -0.08608868718147278, -1.2833353281021118, -0.04911373183131218, -0.23803967237472534, ...] |
  | Let me begin with my parents. | माझ्या पालकांपासून मी सुरु करते. | [-0.6556792855262756, -0.7583472728729248, 0.04619251936674118, -0.42713433504104614, -0.18057923018932343, ...] |
* Loss: [MSELoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)
#### en-hi
* Dataset: [en-hi](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks) at [604450b](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks/tree/604450baf780fd49257a8541c331e7bb5a90171d)
* Size: 1,000 evaluation samples
* Columns: `english`, `non_english`, and `label`
* Approximate statistics based on the first 1000 samples:
  |      | english | non_english | label |
  |:-----|:--------|:------------|:------|
  | type | string  | string      | list  |
* Samples:
  | english | non_english | label |
  |:--------|:------------|:------|
  | Thank you so much, Chris. | बहुत बहुत धन्यवाद,क्रिस. | [0.6755521297454834, 0.03665495663881302, -0.060318127274513245, 0.7523263692855835, -0.6887623071670532, ...] |
  | And it's truly a great honor to have the opportunity to come to this stage twice; I'm extremely grateful. | और यह सच में एक बड़ा सम्मान है कि मुझे इस मंच पर दोबारा आने का मौका मिला. मैं बहुत आभारी हूँ | [-0.16181467473506927, -0.18791291117668152, -0.5519911050796509, 0.9049180150032043, -0.747071385383606, ...] |
  | I have been blown away by this conference, and I want to thank all of you for the many nice comments about what I had to say the other night. | मैं इस सम्मलेन से बहुत आश्चर्यचकित हो गया हूँ, और मैं आप सबको धन्यवाद कहना चाहता हूँ उन सभी अच्छी टिप्पणियों के लिए, जो आपने मेरी पिछली रात के भाषण पर करीं. | [0.28718116879463196, -0.5640321373939514, -0.14048989117145538, 0.6461797952651978, -0.7105054259300232, ...] |
* Loss: [MSELoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)
#### en-bn
* Dataset: [en-bn](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks) at [604450b](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks/tree/604450baf780fd49257a8541c331e7bb5a90171d)
* Size: 1,000 evaluation samples
* Columns: `english`, `non_english`, and `label`
* Approximate statistics based on the first 1000 samples:
  |      | english | non_english | label |
  |:-----|:--------|:------------|:------|
  | type | string  | string      | list  |
* Samples:
  | english | non_english | label |
  |:--------|:------------|:------|
  | The first thing I want to do is say thank you to all of you. | প্রথমেই আমি আপনাদের সবাইকে ধন্যবাদ জানাতে চাই। | [-0.00464015593752265, -0.2528093159198761, -0.2521325945854187, 0.8438198566436768, -0.5279574990272522, ...] |
  | The second thing I want to do is introduce my co-author and dear friend and co-teacher. | দ্বিতীয় যে কাজটা করতে চাই, তা হল- পরিচয় করিয়ে দিতে চাই আমার সহ-লেখক, প্রিয় বন্ধু ও সহ-শিক্ষকের সঙ্গে। | [0.4810849130153656, -0.14021430909633636, 0.19718660414218903, -0.5403660535812378, 0.06668329983949661, ...] |
  | Ken and I have been working together for almost 40 years. | কেইন আর আমি একসঙ্গে কাজ করছি প্রায় ৪০ বছর ধরে | [0.21682043373584747, 0.1364896148443222, -0.4569880962371826, 1.075974464416504, 0.17770573496818542, ...] |
* Loss: [MSELoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)
#### en-gu
* Dataset: [en-gu](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks) at [604450b](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks/tree/604450baf780fd49257a8541c331e7bb5a90171d)
* Size: 1,000 evaluation samples
* Columns: `english`, `non_english`, and `label`
* Approximate statistics based on the first 1000 samples:
  |      | english | non_english | label |
  |:-----|:--------|:------------|:------|
  | type | string  | string      | list  |
* Samples:
  | english | non_english | label |
  |:--------|:------------|:------|
  | Thank you so much, Chris. | ખુબ ખુબ ધન્યવાદ ક્રીસ. | [0.6755521297454834, 0.03665495663881302, -0.060318127274513245, 0.7523263692855835, -0.6887623071670532, ...] |
  | And it's truly a great honor to have the opportunity to come to this stage twice; I'm extremely grateful. | અને એ તો ખરેખર મારું અહોભાગ્ય છે. કે મને અહી મંચ પર બીજી વખત આવવાની તક મળી. હું ખુબ જ કૃતજ્ઞ છું . | [-0.16181467473506927, -0.18791291117668152, -0.5519911050796509, 0.9049180150032043, -0.747071385383606, ...] |
  | I have been blown away by this conference, and I want to thank all of you for the many nice comments about what I had to say the other night. | હું આ સંમેલન થી ઘણો ખુશ થયો છે, અને તમને બધાને ખુબ જ આભારું છું જે મારે ગયી વખતે કહેવાનું હતું એ બાબતે સારી ટીપ્પણીઓ (કરવા) માટે. | [0.28718116879463196, -0.5640321373939514, -0.14048989117145538, 0.6461797952651978, -0.7105054259300232, ...] |
* Loss: [MSELoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)
#### en-ta
* Dataset: [en-ta](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks) at [604450b](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks/tree/604450baf780fd49257a8541c331e7bb5a90171d)
* Size: 1,000 evaluation samples
* Columns: `english`, `non_english`, and `label`
* Approximate statistics based on the first 1000 samples:
  |      | english | non_english | label |
  |:-----|:--------|:------------|:------|
  | type | string  | string      | list  |
* Samples:
  | english | non_english | label |
  |:--------|:------------|:------|
  | Now I'm going to give you a story. | தற்போது நான் உங்களுக்கு ஒரு செய்தி சொல்லப்போகிறேன். | [0.19280874729156494, -0.07861180603504181, -0.40782108902931213, 0.3979630172252655, 0.08477412909269333, ...] |
  | It's an Indian story about an Indian woman and her journey. | இது ஒரு இந்திய பெண்ணின் பயணத்தைப் பற்றிய செய்தி | [-0.5461456179618835, -0.08608868718147278, -1.2833353281021118, -0.04911373183131218, -0.23803967237472534, ...] |
  | Let me begin with my parents. | எனது பெற்றோர்களிலிருந்து தொடங்குகின்றேன். | [-0.6556792855262756, -0.7583472728729248, 0.04619251936674118, -0.42713433504104614, -0.18057923018932343, ...] |
* Loss: [MSELoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)
#### en-kn
* Dataset: [en-kn](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks) at [604450b](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks/tree/604450baf780fd49257a8541c331e7bb5a90171d)
* Size: 1,000 evaluation samples
* Columns: `english`, `non_english`, and `label`
* Approximate statistics based on the first 1000 samples:
  |      | english | non_english | label |
  |:-----|:--------|:------------|:------|
  | type | string  | string      | list  |
* Samples:
  | english | non_english | label |
  |:--------|:------------|:------|
  | The night before I was heading for Scotland, I was invited to host the final of "China's Got Talent" show in Shanghai with the 80,000 live audience in the stadium. | ನಾನು ಸ್ಕಾಟ್ ಲ್ಯಾಂಡ್ ಗೆ ಬಾರೋ ಹಿಂದಿನ ರಾತ್ರಿ ಶಾಂಗಯ್ ನಲ್ಲಿ ನಡೆದ "ಚೈನಾ ಹ್ಯಾಸ್ ಗಾಟ್ ದ ಟ್ಯಾಲೆಂಟ್" ಕಾರ್ಯಕ್ರಮದ ಫೈನಲ್ ಎಪಿಸೋಡ್ ಗೆ ನಿರೂಪಕಿಯಾಗಿ ಹೋಗಬೇಕಾಗಿತ್ತು ಸುಮಾರು ೮೦೦೦೦ ಜನ ಸೇರಿದ್ದ ಆ ಸ್ಟೇಡಿಯಂನಲ್ಲಿ | [-0.7951263189315796, -0.7824558615684509, -0.35716816782951355, -0.32674771547317505, -0.11001778393983841, ...] |
  | Guess who was the performing guest? | ಯಾರು ಪರ್ಫಾರ್ಮ್ ಮಾಡ್ತಾಯಿದ್ರು ಗೊತ್ತಾ ..? | [0.35022979974746704, -0.13758550584316254, -0.30045709013938904, -0.26804691553115845, -0.45069000124931335, ...] |
  | Susan Boyle. | ಸುಸನ್ ಬಾಯ್ಲೇ | [0.08617134392261505, -0.4860222339630127, -0.18299497663974762, 0.2238812893629074, -0.2626381516456604, ...] |
* Loss: [MSELoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)
#### en-te
* Dataset: [en-te](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks) at [604450b](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks/tree/604450baf780fd49257a8541c331e7bb5a90171d)
* Size: 1,000 evaluation samples
* Columns: `english`, `non_english`, and `label`
* Approximate statistics based on the first 1000 samples:
  |      | english | non_english | label |
  |:-----|:--------|:------------|:------|
  | type | string  | string      | list  |
* Samples:
  | english | non_english | label |
  |:--------|:------------|:------|
  | A few years ago, I felt like I was stuck in a rut, so I decided to follow in the footsteps of the great American philosopher, Morgan Spurlock, and try something new for 30 days. | కొన్ని సంవత్సరాల ముందు, నేను బాగా ఆచరానములో ఉన్న ఆచారాన్ని పాతిస్తునాట్లు భావన నాలో కలిగింది. అందుకే నేను గొప్ప అమెరికన్ తత్వవేత్తఅయిన మోర్గన్ స్పుర్లాక్ గారి దారిని పాటించాలనుకున్నాను. అదే 30 రోజులలో కొత్త వాటి కోసం ప్రయత్నించటం | [-0.08676779270172119, -0.40070414543151855, -0.45080363750457764, -0.14886732399463654, -1.1394624710083008, ...] |
  | The idea is actually pretty simple. | ఈ ఆలోచన చాలా సులభమైనది. | [-0.3568742871284485, 0.4474738538265228, 0.05005272850394249, -0.5078891515731812, -0.43413764238357544, ...] |
  | Think about something you've always wanted to add to your life and try it for the next 30 days. | మీ జీవితములో మీరు చేయాలి అనుకునే పనిని ఆలోచించండి. తరువాతా ఆ పనిని తదుపరి 30 రోజులలో ప్రయత్నించండి. | [-0.3424505889415741, 0.566207230091095, -0.5596306324005127, -0.12378782778978348, -0.7162606716156006, ...] |
* Loss: [MSELoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)
#### en-ml
* Dataset: [en-ml](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks) at [604450b](https://huggingface.co/datasets/aloobun/indic-parallel-sentences-talks/tree/604450baf780fd49257a8541c331e7bb5a90171d)
* Size: 1,000 evaluation samples
* Columns: `english`, `non_english`, and `label`
* Approximate statistics based on the first 1000 samples:
  |      | english | non_english | label |
  |:-----|:--------|:------------|:------|
  | type | string  | string      | list  |
* Samples:
  | english | non_english | label |
  |:--------|:------------|:------|
  | My big idea is a very, very small idea that can unlock billions of big ideas that are at the moment dormant inside us. | എന്റെ വലിയ ആശയം വാസ്തവത്തില് ഒരു വളരെ ചെറിയ ആശയമാണ് നമ്മുടെ അകത്തു ഉറങ്ങിക്കിടക്കുന്ന കോടിക്കണക്കിനു മഹത്തായ ആശയങ്ങളെ പുറത്തു കൊണ്ടുവരാന് അതിനു കഴിയും | [-0.5196835398674011, -0.486665815114975, -0.3554009795188904, -0.4337313771247864, -0.2802641689777374, ...] |
  | And my little idea that will do that is sleep. | എന്റെ ആ ചെറിയ ആശയമാണ് നിദ്ര | [-0.38715794682502747, 0.13692918419837952, -0.05456114560365677, -0.5371901988983154, -0.4038388431072235, ...] |
  | (Laughter) (Applause) This is a room of type A women. | (സദസ്സില് ചിരി) (പ്രേക്ഷകരുടെ കൈയ്യടി) ഇത് ഉന്നത ഗണത്തില് പെടുന്ന സ്ത്രീകളുടെ ഒരു മുറിയാണ് | [0.14095601439476013, 0.5374701619148254, -0.07505392283201218, 0.0036823241971433163, -0.5300045013427734, ...] |
* Loss: [MSELoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
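Expressed as a `SentenceTransformerTrainingArguments` sketch, these values map directly onto the trainer configuration (`output_dir` is a placeholder):
```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=2e-5,
    num_train_epochs=5,
    warmup_ratio=0.1,
    fp16=True,
)
```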
#### All Hyperparameters