SentenceTransformer based on HIT-TMG/KaLM-embedding-multilingual-mini-v1

This is a sentence-transformers model finetuned from HIT-TMG/KaLM-embedding-multilingual-mini-v1. It maps sentences & paragraphs to an 896-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: HIT-TMG/KaLM-embedding-multilingual-mini-v1
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 896 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: 494M parameters (F32)

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: Qwen2Model 
  (1): Pooling({'word_embedding_dimension': 896, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
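
The three modules above can be reproduced by hand, which clarifies what the model computes: contextual token embeddings from the Qwen2 backbone, an attention-mask-aware mean over tokens, then L2 normalization. Below is a minimal sketch, assuming only torch and transformers are installed; it loads the base model id purely for illustration (the fine-tuned weights load the same way):

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Illustration only: the base model stands in for the fine-tuned checkpoint
model_id = "HIT-TMG/KaLM-embedding-multilingual-mini-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

batch = tokenizer(["first sentence", "second sentence"], padding=True,
                  truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # [batch, seq_len, 896]

# (1) Pooling: mean over real tokens only, weighted by the attention mask
mask = batch["attention_mask"].unsqueeze(-1).float()
pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# (2) Normalize: unit-length vectors, so a dot product equals cosine similarity
embeddings = F.normalize(pooled, p=2, dim=1)
print(embeddings.shape)  # torch.Size([2, 896])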

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("am-azadi/KaLM-embedding-multilingual-mini-v1_Fine_Tuned_1e")
# Run inference
sentences = [
    'The putschists sold out Mali  Read it is extremely serious MOVEMENT COORDINATION OF AZAWAD (CMA) - having regard to the CMA charter - having regard to the Internal Regulations of the Management Committee - given the need for the service STEERING COMMITTEE Decision N°013/Pdt CMA Bearing the Boundary of the State of AZAWAD and the State of Mali. The President of the CMA: تنسيقية الحركات الأزوادية DECIDED: Article 1: The start of bormage works between the State of Azawad and the State of Mali in order to avoid all conflicts of interest. Article 2: Prohibition of all military operations without the prior agreement of 40-1tl.: 0:51 +1.XIA the State of Azawad and its partners (Barkhane and Minusma) who provided efforts for our independence. Amplification: EMGA/CMA.. Fama area of Gao01 Minusma Kidal: Barkhane Kidal: Article 3: This decision takes effect from the date of its signature and will be recorded and published wherever needed. 01 ...01 01 Kidal, February 2, 2004 THE PRESIDENTIA SIDI IBRAHIM OULD SIDATT',
    'The CMA announces the start of the demarcation between the State of Azawad and the State of Mali Please note, this document attributed to former Tuareg separatist rebels is a fake',
    'Pfizer announces Covid-19 vaccine update with Microsoft chip for symptom reduction Pfizer did not announce an agreement with Microsoft: the article about the chip in the covid vaccine is a satire',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 896]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
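
Because the Normalize module makes every embedding unit-length, cosine similarity reduces to a dot product, which suits semantic search over a corpus of fact-check titles. A small usage sketch, reusing strings from the examples above as an illustrative corpus:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("am-azadi/KaLM-embedding-multilingual-mini-v1_Fine_Tuned_1e")

corpus = [
    "The video predates Typhoon In-Fa that hit eastern China in 2021",
    "Facebook posts share false claim about size of anti-vaccine mandate protest in Australia",
]
query = "Video of flooded metro as Typhoon In-Fa hit Shanghai"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode(query)

# Retrieve the top-k most similar corpus entries for the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(round(hit["score"], 3), corpus[hit["corpus_id"]])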

Training Details

Training Dataset

Unnamed Dataset

  • Size: 21,769 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: string; min: 2 tokens, mean: 111.86 tokens, max: 512 tokens
    • sentence_1: string; min: 14 tokens, mean: 34.13 tokens, max: 127 tokens
  • Samples:
    • sentence_0: CAMP CANBERRA - the biggest gathering in Canberra of all time! Police report they let 1.4 million vehicles through and that was yesterday. People were still pouring in overnight and all morning. Most vehicles had more than one person in them. Amongst the vehicles there were 100s of special buses that came full of people from all over Australia. So doubling that number can still be considered quite a conservative estimate. Population of Australia : 25 million. When 5 million show up, that's 20% of the country and there's HEAPS of us that couldn't make it! Here's a HUGE SHOUT-OUT and THANK YOU to ALL who did! Lighting a candle for all who are rising up all over the world. I Love it when we stand Peacefully in Love as One! TO THE REBELS This is for ones that see the through the deception and lies. That actively resist tyranny and live a life which is lead by their own intuition and heart. They are owned by no one. To the brave Women and Men who courageously risk their reputation and relat...
      sentence_1: Anti-vaccine mandate protests attract over one million vehicles to Canberra Facebook posts share false claim about size of anti-vaccine mandate protest in Australia
    • sentence_0: Typhoon fireworks land in Shanghai emergency (12) The extension line of Shanghai Metro No. 1 began to flood. video
      sentence_1: Video of flooded metro as Typhoon In-Fa hit Shanghai The video predates Typhoon In-Fa that hit eastern China in 2021
    • sentence_0: WHO declares PCR tests unreliable At the same time as Joe Biden was sworn in as the new US President, the WHO questioned the reliability of the PCR test. I can't remember who kept mentioning this before? Was that me in the end? According to the WHO, a PCR test alone is not enough to detect an infection. How many millions of people have sat in quarantine for nothing? Positive test results in symptom-free people are not usable! For my critical statements about the PCR test, I was disparaged by self-appointed fact-checkers. I also remember a discussion on Servus-TV, where Prof. Manfred Spitzer, whom I valued before, suppressed any criticism of the PCR test in a highly authoritarian and almost aggressive manner. "Positive PCR test means infected!" We now know that the basis for the tightened and probably ever-extended lockdown is a political and not a scientific decision. Of course, politicians refer to scientists. However, only to those who support the political course. Two brave editors ...
      sentence_1: The WHO confirmed that PCR tests are unsuitable for detecting corona WHO recommendations on PCR tests are misinterpreted
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
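
With these parameters, the loss computes a cosine-similarity matrix between the sentence_0 and sentence_1 embeddings of a batch, scales it by 20, and applies cross-entropy with the matching pairs on the diagonal; every other sentence_1 in the batch acts as an in-batch negative. A minimal sketch of that computation:

import torch
import torch.nn.functional as F

def mnr_loss(anchors, positives, scale=20.0):
    # sims[i, j] = cosine similarity between anchor i and positive j
    sims = F.cosine_similarity(anchors.unsqueeze(1), positives.unsqueeze(0), dim=-1) * scale
    # Matching pair i sits at column i; all other columns are in-batch negatives
    labels = torch.arange(anchors.size(0))
    return F.cross_entropy(sims, labels)

# Toy check with random vectors standing in for embeddings
anchors, positives = torch.randn(4, 896), torch.randn(4, 896)
print(mnr_loss(anchors, positives))

Note that with the per-device batch size of 2 used in this run, each anchor sees only a single in-batch negative, far fewer than this loss typically benefits from.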
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 2
  • per_device_eval_batch_size: 2
  • num_train_epochs: 1
  • multi_dataset_batch_sampler: round_robin
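
A minimal sketch of how a run with these settings could be launched via the Sentence Transformers trainer API; the inline dataset is a stand-in, since the actual 21,769-pair dataset is not published with this card:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("HIT-TMG/KaLM-embedding-multilingual-mini-v1")

# Stand-in (sentence_0, sentence_1) pairs; the real run used 21,769 of them
train_dataset = Dataset.from_dict({
    "sentence_0": ["claim text ...", "another claim ..."],
    "sentence_1": ["matching fact-check title ...", "another title ..."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="output",
    num_train_epochs=1,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    multi_dataset_batch_sampler="round_robin",  # only relevant with multiple datasets
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=MultipleNegativesRankingLoss(model),
)
trainer.train()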

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 2
  • per_device_eval_batch_size: 2
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step Training Loss
0.0459 500 0.0142
0.0919 1000 0.0367
0.1378 1500 0.0444
0.1837 2000 0.0581
0.2297 2500 0.045
0.2756 3000 0.0736
0.3215 3500 0.0567
0.3675 4000 0.0314
0.4134 4500 0.0362
0.4593 5000 0.029
0.5053 5500 0.0621
0.5512 6000 0.0328
0.5972 6500 0.0279
0.6431 7000 0.0343
0.6890 7500 0.0251
0.7350 8000 0.0437
0.7809 8500 0.0328
0.8268 9000 0.0123
0.8728 9500 0.0177
0.9187 10000 0.0332
0.9646 10500 0.0214

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 3.4.1
  • Transformers: 4.48.3
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.3.2
  • Tokenizers: 0.21.0
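
To reproduce this environment, the library versions above can be pinned at install time (the PyTorch CUDA build varies by platform, so it is left to the platform default here):

pip install "sentence-transformers==3.4.1" "transformers==4.48.3" "accelerate==1.3.0" "datasets==3.3.2" "tokenizers==0.21.0"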

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}