---
base_model: sentence-transformers/all-MiniLM-L12-v2
library_name: sentence-transformers
metrics:
  - pearson_cosine
  - spearman_cosine
  - pearson_manhattan
  - spearman_manhattan
  - pearson_euclidean
  - spearman_euclidean
  - pearson_dot
  - spearman_dot
  - pearson_max
  - spearman_max
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:100000
  - loss:CosineSimilarityLoss
widget:
  - source_sentence: >-
      A woman wearing a yellow shirt is holding a plate which contains a piece
      of cake.
    sentences:
      - >-
        The woman in the yellow shirt might have cut the cake and placed it on
        the plate.
      - Male bicyclists compete in the Tour de France.
      - The man is walking
  - source_sentence: People gather and talk in the street.
    sentences:
      - Club goers outside discussing the police raid.
      - a woman is leaning on a skateboard
      - There are many people singing.
  - source_sentence: A child sliding face first down a metal tube
    sentences:
      - A man with a red shirt is bowling with his 2 sons.
      - The child is sliding face first
      - There is a girl in a dress.
  - source_sentence: A man walking a gray poodle is walking past a billboard with a cow on it.
    sentences:
      - >-
        A house build with wooden stairs and the family is enjoying sitting on
        them
      - A woman is playing checkers.
      - The man is walking his grey cat.
  - source_sentence: A man fishing in a pointy blue boat on a river lined with palm trees.
    sentences:
      - Labrador Retrievers are energetic dogs that will play catch for hours.
      - A man rubs his bald head.
      - The man is with friends.
model-index:
  - name: SentenceTransformer based on sentence-transformers/all-MiniLM-L12-v2
    results:
      - task:
          type: semantic-similarity
          name: Semantic Similarity
        dataset:
          name: snli dev
          type: snli-dev
        metrics:
          - type: pearson_cosine
            value: 0.5002872232214081
            name: Pearson Cosine
          - type: spearman_cosine
            value: 0.49187589438593304
            name: Spearman Cosine
          - type: pearson_manhattan
            value: 0.47522303163337404
            name: Pearson Manhattan
          - type: spearman_manhattan
            value: 0.49169237941097593
            name: Spearman Manhattan
          - type: pearson_euclidean
            value: 0.47599896939605724
            name: Pearson Euclidean
          - type: spearman_euclidean
            value: 0.49187587264847454
            name: Spearman Euclidean
          - type: pearson_dot
            value: 0.5002872256206143
            name: Pearson Dot
          - type: spearman_dot
            value: 0.49187604689169206
            name: Spearman Dot
          - type: pearson_max
            value: 0.5002872256206143
            name: Pearson Max
          - type: spearman_max
            value: 0.49187604689169206
            name: Spearman Max
---

# SentenceTransformer based on sentence-transformers/all-MiniLM-L12-v2

This is a [sentence-transformers](https://www.sbert.net) model finetuned from [sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2). It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2)
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://www.sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
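
The same three-module stack can be assembled by hand from `sentence_transformers.models`. A minimal sketch for illustration; it loads the base all-MiniLM-L12-v2 weights, not this finetuned checkpoint:

```python
from sentence_transformers import SentenceTransformer, models

# Rebuild the Transformer -> Pooling -> Normalize stack printed above.
word_embedding_model = models.Transformer(
    "sentence-transformers/all-MiniLM-L12-v2", max_seq_length=128
)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 384
    pooling_mode_mean_tokens=True,  # mean pooling, as configured above
)
model = SentenceTransformer(
    modules=[word_embedding_model, pooling_model, models.Normalize()]
)
print(model)  # prints the same module stack shown above
```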

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Nessrine9/Finetune2-MiniLM-L12-v2")
# Run inference
sentences = [
    'A man fishing in a pointy blue boat on a river lined with palm trees.',
    'The man is with friends.',
    'A man rubs his bald head.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
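
Because the embeddings are L2-normalized and the configured similarity function is cosine, the matrix from `model.similarity` can be used directly to rank candidates against a query. A small usage sketch continuing the snippet above:

```python
# Rank two candidate sentences against a query by cosine similarity.
query_embedding = model.encode(
    ["A man fishing in a pointy blue boat on a river lined with palm trees."]
)
candidates = ["The man is with friends.", "A man rubs his bald head."]
candidate_embeddings = model.encode(candidates)

scores = model.similarity(query_embedding, candidate_embeddings)[0]
for sentence, score in sorted(
    zip(candidates, scores.tolist()), key=lambda pair: -pair[1]
):
    print(f"{score:.4f}  {sentence}")
```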

## Evaluation

### Metrics

#### Semantic Similarity

| Metric             |  Value |
|:-------------------|-------:|
| pearson_cosine     | 0.5003 |
| spearman_cosine    | 0.4919 |
| pearson_manhattan  | 0.4752 |
| spearman_manhattan | 0.4917 |
| pearson_euclidean  | 0.4760 |
| spearman_euclidean | 0.4919 |
| pearson_dot        | 0.5003 |
| spearman_dot       | 0.4919 |
| pearson_max        | 0.5003 |
| spearman_max       | 0.4919 |
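
The metric names above match the output of sentence-transformers' `EmbeddingSimilarityEvaluator`, which correlates model similarities with gold scores under cosine, Manhattan, Euclidean, and dot-product similarity. A hedged sketch of running such an evaluation; the pairs and labels below are placeholders, since the actual snli-dev pairs are not included in this card:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("Nessrine9/Finetune2-MiniLM-L12-v2")

# Placeholder pairs and 0.0-1.0 labels; substitute the real snli-dev data.
sentences1 = [
    "People gather and talk in the street.",
    "A child sliding face first down a metal tube",
    "A man walking a gray poodle is walking past a billboard with a cow on it.",
]
sentences2 = [
    "Club goers outside discussing the police raid.",
    "The child is sliding face first",
    "The man is walking his grey cat.",
]
labels = [0.5, 1.0, 0.0]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, labels, name="snli-dev")
print(evaluator(model))  # dict of pearson/spearman values per similarity measure
```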

## Training Details

### Training Dataset

#### Unnamed Dataset

- **Size:** 100,000 training samples
- **Columns:** `sentence_0`, `sentence_1`, and `label`
- **Approximate statistics based on the first 1000 samples:**

  |         | sentence_0                                        | sentence_1                                        | label                         |
  |:--------|:--------------------------------------------------|:--------------------------------------------------|:------------------------------|
  | type    | string                                            | string                                            | float                         |
  | details | min: 4 tokens, mean: 16.38 tokens, max: 61 tokens | min: 4 tokens, mean: 10.56 tokens, max: 43 tokens | min: 0.0, mean: 0.5, max: 1.0 |

- **Samples:**

  | sentence_0                                                        | sentence_1                   | label |
  |:------------------------------------------------------------------|:-----------------------------|:------|
  | Three men in an art gallery posing for the camera.                | Paintings are nearby.        | 0.5   |
  | A shirtless man wearing a vest walks on a stage with his arms up. | The man is about to perform. | 0.5   |
  | The man is walking outside near a rocky river.                    | The man is walking           | 0.0   |

- **Loss:** `CosineSimilarityLoss` with these parameters (a minimal wiring sketch follows below):

  ```json
  {
      "loss_fct": "torch.nn.modules.loss.MSELoss"
  }
  ```
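
`CosineSimilarityLoss` embeds both sentences, computes their cosine similarity, and regresses it onto the float `label` with the `MSELoss` shown above. A minimal wiring sketch, using illustrative rows in the same column layout rather than the actual 100,000-pair training set:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")

# Illustrative rows in the sentence_0 / sentence_1 / label layout above.
train_dataset = Dataset.from_dict({
    "sentence_0": [
        "A shirtless man wearing a vest walks on a stage with his arms up.",
        "The man is walking outside near a rocky river.",
    ],
    "sentence_1": [
        "The man is about to perform.",
        "The man is walking",
    ],
    "label": [0.5, 0.0],
})

# The cosine similarity of each pair is fit to its label via torch.nn.MSELoss.
loss = CosineSimilarityLoss(model)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```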
    

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 4
- `fp16`: True
- `multi_dataset_batch_sampler`: round_robin
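
The non-default values above map directly onto `SentenceTransformerTrainingArguments`. A minimal sketch, assuming a placeholder `output_dir`; pass the result as `args=` to the `SentenceTransformerTrainer` shown earlier:

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="finetune2-minilm-l12-v2",  # placeholder, not from the card
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=4,
    fp16=True,
    multi_dataset_batch_sampler="round_robin",
)
```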

#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>

### Training Logs

| Epoch | Step  | Training Loss | snli-dev_spearman_max |
|:-----:|:-----:|:-------------:|:---------------------:|
| 0.08  | 500   | 0.1842        | 0.3333                |
| 0.16  | 1000  | 0.1489        | 0.3449                |
| 0.24  | 1500  | 0.1427        | 0.3633                |
| 0.32  | 2000  | 0.1391        | 0.3854                |
| 0.40  | 2500  | 0.1401        | 0.4015                |
| 0.48  | 3000  | 0.1390        | 0.3982                |
| 0.56  | 3500  | 0.1352        | 0.4327                |
| 0.64  | 4000  | 0.1319        | 0.4262                |
| 0.72  | 4500  | 0.1336        | 0.4034                |
| 0.80  | 5000  | 0.1321        | 0.4021                |
| 0.88  | 5500  | 0.1309        | 0.4294                |
| 0.96  | 6000  | 0.1271        | 0.4198                |
| 1.00  | 6250  | -             | 0.4317                |
| 1.04  | 6500  | 0.1320        | 0.4445                |
| 1.12  | 7000  | 0.1296        | 0.4509                |
| 1.20  | 7500  | 0.1236        | 0.4559                |
| 1.28  | 8000  | 0.1257        | 0.4542                |
| 1.36  | 8500  | 0.1236        | 0.4507                |
| 1.44  | 9000  | 0.1277        | 0.4540                |
| 1.52  | 9500  | 0.1249        | 0.4664                |
| 1.60  | 10000 | 0.1208        | 0.4418                |
| 1.68  | 10500 | 0.1228        | 0.4457                |
| 1.76  | 11000 | 0.1212        | 0.4222                |
| 1.84  | 11500 | 0.1203        | 0.4507                |
| 1.92  | 12000 | 0.1190        | 0.4572                |
| 2.00  | 12500 | 0.1196        | 0.4667                |
| 2.08  | 13000 | 0.1194        | 0.4733                |
| 2.16  | 13500 | 0.1172        | 0.4786                |
| 2.24  | 14000 | 0.1172        | 0.4765                |
| 2.32  | 14500 | 0.1145        | 0.4717                |
| 2.40  | 15000 | 0.1167        | 0.4803                |
| 2.48  | 15500 | 0.1177        | 0.4678                |
| 2.56  | 16000 | 0.1162        | 0.4805                |
| 2.64  | 16500 | 0.1137        | 0.4780                |
| 2.72  | 17000 | 0.1153        | 0.4788                |
| 2.80  | 17500 | 0.1150        | 0.4784                |
| 2.88  | 18000 | 0.1128        | 0.4864                |
| 2.96  | 18500 | 0.1100        | 0.4812                |
| 3.00  | 18750 | -             | 0.4823                |
| 3.04  | 19000 | 0.1136        | 0.4900                |
| 3.12  | 19500 | 0.1135        | 0.4897                |
| 3.20  | 20000 | 0.1094        | 0.4856                |
| 3.28  | 20500 | 0.1108        | 0.4889                |
| 3.36  | 21000 | 0.1083        | 0.4909                |
| 3.44  | 21500 | 0.1133        | 0.4892                |
| 3.52  | 22000 | 0.1106        | 0.4910                |
| 3.60  | 22500 | 0.1079        | 0.4888                |
| 3.68  | 23000 | 0.1091        | 0.4890                |
| 3.76  | 23500 | 0.1079        | 0.4822                |
| 3.84  | 24000 | 0.1087        | 0.4887                |
| 3.92  | 24500 | 0.1066        | 0.4926                |
| 4.00  | 25000 | 0.1069        | 0.4919                |

### Framework Versions

- Python: 3.10.12
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.5.0+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.2
- Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```