SentenceTransformer

This is a sentence-transformers model trained on the sci_gen_colbert_triplets dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Maximum Sequence Length: unlimited (the static embedding layer has no positional encoding, so no sequence-length cap applies)
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: sci_gen_colbert_triplets

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: sentence-transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): StaticEmbedding(
    (embedding): EmbeddingBag(30522, 768, mode='mean')
  )
)
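
The architecture is a single static embedding table: every token id indexes a 768-dimensional row of an EmbeddingBag, and the sentence embedding is simply the mean of the token vectors, with no attention or positional encoding. A minimal PyTorch sketch of this pooling behavior (the token ids below are hypothetical):

import torch
import torch.nn as nn

# One "bag" per sentence: EmbeddingBag looks up each token id and averages
# the resulting 768-d vectors. This lookup-and-mean is the entire forward pass.
embedding = nn.EmbeddingBag(30522, 768, mode="mean")
token_ids = torch.tensor([101, 2023, 2003, 1037, 7099, 102])  # hypothetical ids
offsets = torch.tensor([0])  # a single sentence starting at position 0
sentence_embedding = embedding(token_ids, offsets)
print(sentence_embedding.shape)  # torch.Size([1, 768])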

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Corran/SciGenNomicEmbedStatic")
# Run inference
sentences = [
    'Surveys and interviews: Introducing excerpts from interview data',
    "Through surveys and interviews, multiliterate teachers expressed a shared belief in the importance of fostering students' ability to navigate multiple discourse communities.",
    'The authors employ a constructivist approach to learning, where students build knowledge through active engagement with multimedia texts and collaborative discussions.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
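
Because the model was trained with MatryoshkaLoss (see Training Details), its embeddings should remain usable when truncated to the smaller trained dimensionalities. A hedged sketch using the truncate_dim loading option (256 is an illustrative choice; 384, 128, 64, and 32 were also trained):

from sentence_transformers import SentenceTransformer

# Keep only the first 256 of the 768 dimensions; Matryoshka training makes
# the leading dimensions meaningful on their own, trading accuracy for speed.
model = SentenceTransformer("Corran/SciGenNomicEmbedStatic", truncate_dim=256)
embeddings = model.encode(["An example scientific sentence."])
print(embeddings.shape)  # (1, 256)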

Evaluation

Metrics

Information Retrieval

Metric               Value
cosine_accuracy@1    0.8918
cosine_accuracy@3    0.9308
cosine_accuracy@5    0.9481
cosine_accuracy@10   0.9668
cosine_precision@1   0.8918
cosine_precision@3   0.3103
cosine_precision@5   0.1896
cosine_precision@10  0.0967
cosine_recall@1      0.8918
cosine_recall@3      0.9308
cosine_recall@5      0.9481
cosine_recall@10     0.9668
cosine_ndcg@10       0.9279
cosine_mrr@10        0.9157
cosine_map@100       0.9171
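
Metrics like these are typically produced with the library's InformationRetrievalEvaluator. A minimal sketch against a toy corpus (all ids and texts below are invented for illustration):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Corran/SciGenNomicEmbedStatic")

# Maps from query/document ids to text, plus the set of relevant doc ids per query.
queries = {"q1": "Previous research: highlighting negative outcomes"}
corpus = {
    "d1": "Previous research has highlighted negative outcomes of seniority-based wages.",
    "d2": "The WOMAC questionnaire is widely used in physical therapy research.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="toy-eval")
results = evaluator(model)  # dict of metrics: accuracy@k, precision@k, recall@k, NDCG@10, MRR@10, MAP@100
print(results)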

Training Details

Training Dataset

sci_gen_colbert_triplets

  • Dataset: sci_gen_colbert_triplets at 44071bd
  • Size: 35,934 training samples
  • Columns: query, positive, and negative
  • Approximate statistics based on the first 1000 samples:
            query              positive           negative
    type    string             string             string
    min     20 characters      0 characters       96 characters
    mean    50.28 characters   206.53 characters  209.67 characters
    max     120 characters     401 characters     418 characters
  • Samples:
    Sample 1
      query:    Previous research: highlighting negative outcomes
      positive: Despite the widespread use of seniority-based wage systems in labor contracts, previous research has highlighted their negative outcomes, such as inefficiencies and demotivating effects on workers.
      negative: This paper, published in 1974, was among the first to establish the importance of rank-order tournaments as optimal labor contracts in microeconomics.
    Sample 2
      query:    Synthesising sources: contrasting evidence or ideas
      positive: Despite the observed chronic enterocolitis in Interleukin-10-deficient mice, some studies suggest that this cytokine plays a protective role in intestinal inflammation in humans (Kurimoto et al., 2001).
      negative: Chronic enterocolitis developed in Interleukin-10-deficient mice, characterized by inflammatory cell infiltration, epithelial damage, and increased production of pro-inflammatory cytokines.
    Sample 3
      query:    Previous research: Approaches taken
      positive: Previous research on measuring patient-relevant outcomes in osteoarthritis has primarily relied on self-reported measures, such as the Western Ontario and McMaster Universities Arthritis Index (WOMAC) (Bellamy et al., 1988).
      negative: The WOMAC (Western Ontario and McMaster Universities Osteoarthritis Index) questionnaire has been widely used in physical therapy research to assess the impact of antirheumatic drug therapy on patient-reported outcomes in individuals with hip or knee osteoarthritis.
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            384,
            256,
            128,
            64,
            32
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
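
In code, this configuration corresponds to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss. A sketch of how such a loss would be constructed (starting from the model object being trained; the weights default to 1 per dimension, matching the parameters above):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Corran/SciGenNomicEmbedStatic")

# The base loss treats the other in-batch positives as negatives for each query;
# MatryoshkaLoss applies it at every listed dimensionality with equal weight, so
# the leading dimensions of the embedding learn to stand on their own.
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[768, 384, 256, 128, 64, 32])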
    

Evaluation Dataset

sci_gen_colbert_triplets

  • Dataset: sci_gen_colbert_triplets at 44071bd
  • Size: 4,492 evaluation samples
  • Columns: query, positive, and negative
  • Approximate statistics based on the first 1000 samples:
            query              positive           negative
    type    string             string             string
    min     20 characters      98 characters      36 characters
    mean    50.59 characters   203.98 characters  204.82 characters
    max     120 characters     448 characters     422 characters
  • Samples:
    Sample 1
      query:    Providing background information: reference to the purpose of the study
      positive: This study aimed to investigate the impact of socioeconomic status on child development, specifically focusing on cognitive, language, and social-emotional domains.
      negative: Children from high socioeconomic status families showed significantly higher IQ scores (M = 112.5, SD = 5.6) compared to children from low socioeconomic status families (M = 104.3, SD = 6.2) in the verbal IQ subtest.
    Sample 2
      query:    Providing background information: reference to the literature
      positive: According to previous studies using WinGX suite for small-molecule single-crystal crystallography, the optimization of crystal structures leads to improved accuracy in determining atomic coordinates.
      negative: This paper describes the WinGX suite, a powerful tool for small-molecule single-crystal crystallography that significantly advances the field of crystallography by streamlining data collection and analysis.
    Sample 3
      query:    General comments on the relevant literature
      positive: Polymer brushes have gained significant attention in the field of polymer science due to their unique properties, such as controlled thickness, high surface density, and tunable interfacial properties.
      negative: Despite previous reports suggesting that polymer brushes with short grafting densities exhibit poorer performance in terms of adhesion and stability compared to those with higher grafting densities (Liu et al., 2010), our results indicate that the opposite is true for certain types of polymer brushes.
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            384,
            256,
            128,
            64,
            32
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 4096
  • per_device_eval_batch_size: 4096
  • learning_rate: 0.02
  • num_train_epochs: 50
  • warmup_ratio: 0.1
  • fp16: True
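
These settings map directly onto SentenceTransformerTrainingArguments; a hedged sketch of the non-default values (output_dir is a placeholder):

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=4096,
    per_device_eval_batch_size=4096,
    learning_rate=0.02,
    num_train_epochs=50,
    warmup_ratio=0.1,
    fp16=True,
)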

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 4096
  • per_device_eval_batch_size: 4096
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 0.02
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 50
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch     Step   Training Loss   Validation Loss   SciGen-Eval-Set_cosine_ndcg@10
-1        -1     -               -                 0.0860
1.1111    10     64.4072         61.6146           0.0919
2.2222    20     60.2737         56.0852           0.1130
3.3333    30     53.8742         50.1738           0.1611
4.4444    40     47.9741         45.6099           0.2666
5.5556    50     43.3533         42.3335           0.4579
6.6667    60     39.8746         40.0990           0.6244
7.7778    70     37.4077         38.4205           0.7223
8.8889    80     35.3558         37.0939           0.7847
10.0      90     33.5816         36.0200           0.8248
11.1111   100    32.4019         35.1148           0.8469
12.2222   110    31.3427         34.3602           0.8658
13.3333   120    30.4578         33.7324           0.8788
14.4444   130    29.7019         33.2120           0.8882
15.5556   140    29.1315         32.7679           0.8963
16.6667   150    28.6226         32.3942           0.9016
17.7778   160    28.195          32.0693           0.9061
18.8889   170    27.8242         31.7708           0.9096
20.0      180    27.373          31.5369           0.9137
21.1111   190    27.2436         31.3331           0.9168
22.2222   200    27.0084         31.1571           0.9188
23.3333   210    26.8023         31.0074           0.9205
24.4444   220    26.6754         30.8726           0.9217
25.5556   230    26.4875         30.7545           0.9224
26.6667   240    26.3846         30.6494           0.9236
27.7778   250    26.2546         30.5660           0.9243
28.8889   260    26.1752         30.4826           0.9248
30.0      270    25.9247         30.4060           0.9252
31.1111   280    25.9807         30.3540           0.9261
32.2222   290    25.9153         30.3040           0.9262
33.3333   300    25.8643         30.2585           0.9265
34.4444   310    25.7946         30.2183           0.9270
35.5556   320    25.7723         30.1799           0.9272
36.6667   330    25.7091         30.1539           0.9275
37.7778   340    25.6655         30.1296           0.9275
38.8889   350    25.6465         30.1120           0.9276
40.0      360    25.4654         30.0834           0.9279

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 3.4.0
  • Transformers: 4.47.1
  • PyTorch: 2.5.1+cu121
  • Accelerate: 1.2.1
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
