AllMiniLM trained on SciGen triplets
This is a sentence-transformers model fine-tuned from sentence-transformers/all-MiniLM-L12-v2 on the sci_gen_colbert_triplets dataset. It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/all-MiniLM-L12-v2
- Maximum Sequence Length: 128 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: sci_gen_colbert_triplets
- Language: en
- License: apache-2.0
Model Sources
- Documentation: https://www.sbert.net
- Repository: https://github.com/UKPLab/sentence-transformers
- Hugging Face: https://huggingface.co/models?library=sentence-transformers
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
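The three modules correspond to a BERT encoder, mean pooling over non-padding tokens, and L2 normalization. Below is a minimal sketch of what the Pooling and Normalize modules compute, using transformers directly; it assumes the repo can be loaded as a plain transformers checkpoint (as is standard for sentence-transformers repos) and is an illustration, not a required usage path:

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

# Load the fine-tuned checkpoint as a plain transformers model
# (repo id taken from the Usage section below)
tokenizer = AutoTokenizer.from_pretrained("Corran/SciGenAllMiniLM")
encoder = AutoModel.from_pretrained("Corran/SciGenAllMiniLM")

inputs = tokenizer(["An example sentence."], padding=True, truncation=True,
                   max_length=128, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**inputs).last_hidden_state  # (batch, seq_len, 384)

# (1) Pooling: mean over non-padding tokens only
mask = inputs["attention_mask"].unsqueeze(-1).float()
pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# (2) Normalize: unit-length vectors, so dot product equals cosine similarity
embedding = F.normalize(pooled, p=2, dim=1)
print(embedding.shape)  # torch.Size([1, 384])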
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer

# Download the model from the Hugging Face Hub
model = SentenceTransformer("Corran/SciGenAllMiniLM")

# Run inference on a (query, positive, negative) style triplet
sentences = [
    'Surveys and interviews: Introducing excerpts from interview data',
    "Through surveys and interviews, multiliterate teachers expressed a shared belief in the importance of fostering students' ability to navigate multiple discourse communities.",
    'The authors employ a constructivist approach to learning, where students build knowledge through active engagement with multimedia texts and collaborative discussions.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (3, 384)

# Get the similarity scores between all pairs of embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # torch.Size([3, 3])
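Since the embeddings are unit-normalized, model.similarity returns cosine scores, which makes retrieval straightforward. A small semantic-search sketch on top of the same model (the query and documents here are invented examples):

# Encode one rhetorical-move query and a small corpus of candidate sentences
query_embedding = model.encode(["Previous research: highlighting negative outcomes"])
doc_embeddings = model.encode([
    "Previous research has highlighted the demotivating effects of seniority-based wage systems.",
    "The WOMAC questionnaire has been widely used to assess patient-reported outcomes.",
])

# Cosine similarity between the query and each document, shape (1, 2)
scores = model.similarity(query_embedding, doc_embeddings)
best = scores.argmax().item()
print(best, scores[0, best].item())  # index and score of the best-matching document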
Evaluation
Metrics
Triplet

Cosine accuracy reported by the triplet evaluator at successive evaluation points, earliest to final (compare the Training Logs below):

| Metric          | Value  |
|:----------------|:-------|
| cosine_accuracy | 0.6581 |
| cosine_accuracy | 0.9403 |
| cosine_accuracy | 0.9746 |
| cosine_accuracy | 0.9791 |
| cosine_accuracy | 0.9831 |
| cosine_accuracy | 0.9840 |
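For reference, cosine_accuracy is the fraction of triplets in which the query is closer (by cosine similarity) to its positive than to its negative. A minimal sketch of that computation on a single invented triplet; the reported values come from the library's triplet evaluation on the held-out split:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Corran/SciGenAllMiniLM")
queries = ["Previous research: highlighting negative outcomes"]
positives = ["Previous research has highlighted the negative outcomes of seniority-based wage systems."]
negatives = ["This paper establishes rank-order tournaments as optimal labor contracts."]

q = model.encode(queries)
p = model.encode(positives)
n = model.encode(negatives)

# Cosine similarity per (query, positive) and (query, negative) pair
pos_scores = model.similarity_pairwise(q, p)
neg_scores = model.similarity_pairwise(q, n)

# Accuracy = fraction of triplets where the positive outscores the negative
cosine_accuracy = (pos_scores > neg_scores).float().mean().item()
print(cosine_accuracy)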
Training Details
Training Dataset
sci_gen_colbert_triplets
- Dataset: sci_gen_colbert_triplets at 44071bd
- Size: 35,934 training samples
- Columns: query, positive, and negative
- Approximate statistics based on the first 1000 samples:
|         | query | positive | negative |
|:--------|:------|:---------|:---------|
| type    | string | string | string |
| details | min: 5 tokens, mean: 10.24 tokens, max: 23 tokens | min: 2 tokens, mean: 39.86 tokens, max: 80 tokens | min: 18 tokens, mean: 40.41 tokens, max: 88 tokens |
- Samples:
| query | positive | negative |
|:------|:---------|:---------|
| Previous research: highlighting negative outcomes | Despite the widespread use of seniority-based wage systems in labor contracts, previous research has highlighted their negative outcomes, such as inefficiencies and demotivating effects on workers. | This paper, published in 1974, was among the first to establish the importance of rank-order tournaments as optimal labor contracts in microeconomics. |
| Synthesising sources: contrasting evidence or ideas | Despite the observed chronic enterocolitis in Interleukin-10-deficient mice, some studies suggest that this cytokine plays a protective role in intestinal inflammation in humans (Kurimoto et al., 2001). | Chronic enterocolitis developed in Interleukin-10-deficient mice, characterized by inflammatory cell infiltration, epithelial damage, and increased production of pro-inflammatory cytokines. |
| Previous research: Approaches taken | Previous research on measuring patient-relevant outcomes in osteoarthritis has primarily relied on self-reported measures, such as the Western Ontario and McMaster Universities Arthritis Index (WOMAC) (Bellamy et al., 1988). | The WOMAC (Western Ontario and McMaster Universities Osteoarthritis Index) questionnaire has been widely used in physical therapy research to assess the impact of antirheumatic drug therapy on patient-reported outcomes in individuals with hip or knee osteoarthritis. |
- Loss: MatryoshkaLoss with these parameters:
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
384,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1
],
"n_dims_per_step": -1
}
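Because the loss supervises the leading 384, 256, 128, and 64 dimensions, embeddings can be truncated to any of those prefixes with only a modest quality drop. A sketch using the truncate_dim convenience argument of SentenceTransformer; cosine similarity via model.similarity is unaffected by the missing renormalization, since cosine normalizes by definition:

from sentence_transformers import SentenceTransformer

# Load with a smaller output dimensionality; 64 is the smallest trained prefix
model = SentenceTransformer("Corran/SciGenAllMiniLM", truncate_dim=64)

embeddings = model.encode([
    "Synthesising sources: contrasting evidence or ideas",
    "Previous research: Approaches taken",
])
print(embeddings.shape)  # (2, 64)
print(model.similarity(embeddings, embeddings).shape)  # torch.Size([2, 2])

Lower dimensions trade a little accuracy for 6x smaller vectors at 64 dimensions, which speeds up storage and search.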
Evaluation Dataset
sci_gen_colbert_triplets
- Dataset: sci_gen_colbert_triplets at 44071bd
- Size: 4,492 evaluation samples
- Columns: query, positive, and negative
- Approximate statistics based on the first 1000 samples:
|         | query | positive | negative |
|:--------|:------|:---------|:---------|
| type    | string | string | string |
| details | min: 5 tokens, mean: 10.23 tokens, max: 23 tokens | min: 18 tokens, mean: 39.83 tokens, max: 84 tokens | min: 8 tokens, mean: 39.89 tokens, max: 84 tokens |
- Samples:
| query | positive | negative |
|:------|:---------|:---------|
| Providing background information: reference to the purpose of the study | This study aimed to investigate the impact of socioeconomic status on child development, specifically focusing on cognitive, language, and social-emotional domains. | Children from high socioeconomic status families showed significantly higher IQ scores (M = 112.5, SD = 5.6) compared to children from low socioeconomic status families (M = 104.3, SD = 6.2) in the verbal IQ subtest. |
| Providing background information: reference to the literature | According to previous studies using WinGX suite for small-molecule single-crystal crystallography, the optimization of crystal structures leads to improved accuracy in determining atomic coordinates. | This paper describes the WinGX suite, a powerful tool for small-molecule single-crystal crystallography that significantly advances the field of crystallography by streamlining data collection and analysis. |
| General comments on the relevant literature | Polymer brushes have gained significant attention in the field of polymer science due to their unique properties, such as controlled thickness, high surface density, and tunable interfacial properties. | Despite previous reports suggesting that polymer brushes with short grafting densities exhibit poorer performance in terms of adhesion and stability compared to those with higher grafting densities (Liu et al., 2010), our results indicate that the opposite is true for certain types of polymer brushes. |
- Loss: MatryoshkaLoss with these parameters:
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
384,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1
],
"n_dims_per_step": -1
}
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 256
- per_device_eval_batch_size: 256
- learning_rate: 2e-05
- num_train_epochs: 1
- warmup_ratio: 0.1
- fp16: True
- batch_sampler: no_duplicates
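A hedged reconstruction of this setup with the sentence-transformers v3 trainer; the dataset repo id, split names, and output directory below are assumptions, not the author's exact script:

from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import (
    BatchSamplers,
    SentenceTransformerTrainingArguments,
)

model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")

# Hypothetical repo id for the sci_gen_colbert_triplets dataset
dataset = load_dataset("Corran/SciGenColbertTriplets")

# MultipleNegativesRankingLoss wrapped in MatryoshkaLoss, matching the parameters above
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[384, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="scigen-all-minilm",  # assumed
    num_train_epochs=1,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],  # assumed split name
    loss=loss,
)
trainer.train()

The no_duplicates batch sampler matters here because MultipleNegativesRankingLoss treats all other in-batch positives as negatives, so duplicate texts within a batch would create false negatives.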
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 256
- per_device_eval_batch_size: 256
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Training Logs
| Epoch  | Step | Training Loss | Validation Loss | SciGen-AllMiniLM_cosine_accuracy |
|:------:|:----:|:-------------:|:---------------:|:--------------------------------:|
| 0      | 0    | -             | -               | 0.6583 |
| 0.0445 | 100  | 16.2349       | 11.5345         | 0.8566 |
| 0.0890 | 200  | 8.9725        | 7.1884          | 0.9403 |
| 0.1002 | 225  | -             | -               | 0.9497 |
| 0.2226 | 500  | 5.4927        | 4.9641          | 0.9746 |
| 0.3833 | 861  | -             | -               | 0.9791 |
| 0.1348 | 19   | -             | -               | 0.9797 |
| 0.1418 | 20   | 9.2075        | 9.3080          | 0.9793 |
| 0.2837 | 40   | 9.4694        | 9.0598          | 0.9800 |
| 0.4255 | 60   | 9.6385        | 8.9469          | 0.9809 |
| 0.5674 | 80   | 9.1968        | 8.9252          | 0.9824 |
| 0.7092 | 100  | 9.3628        | 8.8540          | 0.9820 |
| 0.8511 | 120  | 9.2204        | 8.8064          | 0.9829 |
| 0.9929 | 140  | 9.3666        | 8.7720          | 0.9831 |
| 1.0    | 141  | -             | -               | 0.9840 |

(The epoch and step counters reset after step 861, consistent with training being resumed or continued in a second run; the final row reports the highest accuracy, 0.9840.)
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}