# SGPT-5.8B-weightedmean-msmarco-specb-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results

For evaluation results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training

The model was trained with the following parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 249592 with parameters:

```
{'batch_size': 2, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:

`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:

```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
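MultipleNegativesRankingLoss treats every other document in the batch as a negative: for each query, a softmax over scaled cosine similarities is trained to pick out its paired document. A minimal NumPy sketch of that objective (an illustration, not the sentence-transformers implementation; note that with `batch_size` 2 each query sees exactly one in-batch negative):

```python
import numpy as np

def cos_sim(a, b):
    # Row-normalize, then the matrix product gives all pairwise cosine similarities.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def multiple_negatives_ranking_loss(query_emb, doc_emb, scale=20.0):
    # Entry (i, j) scores query i against document j; the diagonal holds the positives.
    scores = cos_sim(query_emb, doc_emb) * scale
    # Cross-entropy with the matching document as the target class;
    # all other in-batch documents act as negatives.
    logits = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

The `scale=20.0` factor sharpens the similarities before the softmax, matching the `{'scale': 20.0, 'similarity_fct': 'cos_sim'}` configuration above.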
Parameters of the `fit()` method:

```
{
    "epochs": 10,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 5e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 1000,
    "weight_decay": 0.01
}
```
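Since `steps_per_epoch` is null, the schedule is fixed by the DataLoader length. A quick sanity check of the implied step counts (assuming the sentence-transformers default of one optimizer step per batch, i.e. `len(dataloader)` steps per epoch):

```python
# Implied training schedule from the parameters above.
dataloader_length = 249592   # batches per epoch (DataLoader length)
epochs = 10
warmup_steps = 1000

total_steps = dataloader_length * epochs
warmup_fraction = warmup_steps / total_steps

print(total_steps)      # total optimizer steps across all epochs
print(warmup_fraction)  # linear warmup covers well under 0.1% of training
```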
## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTJModel
  (1): Pooling({'word_embedding_dimension': 4096, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
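`pooling_mode_weightedmean_tokens` is position-weighted mean pooling: token *i* contributes with weight proportional to *i*, so later tokens, which have attended to more context in a causal model, count more. A minimal NumPy sketch of that pooling (an illustration assuming linear position weights as described in the SGPT paper, not the library code):

```python
import numpy as np

def weighted_mean_pooling(token_embeddings, attention_mask):
    """Position-weighted mean over tokens.

    token_embeddings: (seq_len, dim) array of hidden states.
    attention_mask: (seq_len,) array of 0/1 flags; padding gets weight 0.
    """
    # Weights 1..seq_len give later positions more influence.
    weights = np.arange(1, token_embeddings.shape[0] + 1, dtype=float)
    weights = weights * attention_mask  # zero out padding positions
    return (token_embeddings * weights[:, None]).sum(axis=0) / weights.sum()
```

For the model above, `token_embeddings` would be the (up to 300) × 4096 GPT-J hidden states of one input.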
## Citing & Authors

```bibtex
@article{muennighoff2022sgpt,
  title={SGPT: GPT Sentence Embeddings for Semantic Search},
  author={Muennighoff, Niklas},
  journal={arXiv preprint arXiv:2202.08904},
  year={2022}
}
```
## Evaluation results

Self-reported scores on the MTEB AmazonCounterfactualClassification test sets:

| Language | Metric | Score |
| --- | --- | --- |
| en | accuracy | 74.075 |
| en | ap | 37.447 |
| en | f1 | 68.290 |
| de | accuracy | 66.638 |
| de | ap | 78.573 |
| de | f1 | 64.554 |
| en-ext | accuracy | 77.219 |
| en-ext | ap | 25.663 |
| en-ext | f1 | 64.263 |
| ja | accuracy | 58.062 |