BGE base Financial Matryoshka
This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5 on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-base-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: json
- Language: en
- License: apache-2.0
Model Sources
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
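For reference, here is a minimal sketch of how this three-stage pipeline (BERT encoder, CLS-token pooling, L2 normalization) could be assembled by hand with the sentence-transformers models API; in practice you would load the published checkpoint directly, as shown under Usage below.

from sentence_transformers import SentenceTransformer, models

# BERT backbone with the 512-token context window listed above
word_embedding = models.Transformer("BAAI/bge-base-en-v1.5", max_seq_length=512)

# CLS-token pooling, matching pooling_mode_cls_token=True in the architecture dump
pooling = models.Pooling(
    word_embedding.get_word_embedding_dimension(),  # 768
    pooling_mode="cls",
)

# L2-normalize embeddings so dot product and cosine similarity coincide
normalize = models.Normalize()

model = SentenceTransformer(modules=[word_embedding, pooling, normalize])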
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer

# Download the model from the Hugging Face Hub
model = SentenceTransformer("amichelini/bge-base-financial-matryoshka")

# Run inference
sentences = [
    '2023 highlights include net revenues of $5,003.3 million which decreased 15% from $5,856.7 million in 2022.',
    "How did Hasbro's net revenues in 2023 compare to the previous year?",
    'How much cash did continuing operating activities provide in 2023?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores between all pairs of embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# (3, 3)
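Because the model was trained with MatryoshkaLoss (see Training Details), its embeddings can be truncated to a shorter leading slice with only a modest quality drop, as the Evaluation tables below show. A minimal sketch using the truncate_dim argument of SentenceTransformer (available in recent sentence-transformers releases; 256 here is just one of the trained sizes):

from sentence_transformers import SentenceTransformer

# Keep only the first 256 of the 768 dimensions; 768, 512, 256, 128, and 64
# were all trained explicitly via MatryoshkaLoss.
model = SentenceTransformer(
    "amichelini/bge-base-financial-matryoshka",
    truncate_dim=256,
)
embeddings = model.encode(["How much cash did continuing operating activities provide in 2023?"])
print(embeddings.shape)
# (1, 256)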
Evaluation
Metrics
Information Retrieval (dim_768)

| Metric | Value |
|:--------------------|:-------|
| cosine_accuracy@1 | 0.68 |
| cosine_accuracy@3 | 0.81 |
| cosine_accuracy@5 | 0.8514 |
| cosine_accuracy@10 | 0.8943 |
| cosine_precision@1 | 0.68 |
| cosine_precision@3 | 0.27 |
| cosine_precision@5 | 0.1703 |
| cosine_precision@10 | 0.0894 |
| cosine_recall@1 | 0.68 |
| cosine_recall@3 | 0.81 |
| cosine_recall@5 | 0.8514 |
| cosine_recall@10 | 0.8943 |
| cosine_ndcg@10 | 0.7882 |
| cosine_mrr@10 | 0.7541 |
| cosine_map@100 | 0.7585 |
Information Retrieval (dim_512)

| Metric | Value |
|:--------------------|:-------|
| cosine_accuracy@1 | 0.68 |
| cosine_accuracy@3 | 0.8029 |
| cosine_accuracy@5 | 0.8457 |
| cosine_accuracy@10 | 0.8971 |
| cosine_precision@1 | 0.68 |
| cosine_precision@3 | 0.2676 |
| cosine_precision@5 | 0.1691 |
| cosine_precision@10 | 0.0897 |
| cosine_recall@1 | 0.68 |
| cosine_recall@3 | 0.8029 |
| cosine_recall@5 | 0.8457 |
| cosine_recall@10 | 0.8971 |
| cosine_ndcg@10 | 0.7871 |
| cosine_mrr@10 | 0.752 |
| cosine_map@100 | 0.7559 |
Information Retrieval (dim_256)

| Metric | Value |
|:--------------------|:-------|
| cosine_accuracy@1 | 0.6714 |
| cosine_accuracy@3 | 0.7986 |
| cosine_accuracy@5 | 0.8457 |
| cosine_accuracy@10 | 0.8843 |
| cosine_precision@1 | 0.6714 |
| cosine_precision@3 | 0.2662 |
| cosine_precision@5 | 0.1691 |
| cosine_precision@10 | 0.0884 |
| cosine_recall@1 | 0.6714 |
| cosine_recall@3 | 0.7986 |
| cosine_recall@5 | 0.8457 |
| cosine_recall@10 | 0.8843 |
| cosine_ndcg@10 | 0.7799 |
| cosine_mrr@10 | 0.7462 |
| cosine_map@100 | 0.7506 |
Information Retrieval (dim_128)

| Metric | Value |
|:--------------------|:-------|
| cosine_accuracy@1 | 0.66 |
| cosine_accuracy@3 | 0.7914 |
| cosine_accuracy@5 | 0.8286 |
| cosine_accuracy@10 | 0.8814 |
| cosine_precision@1 | 0.66 |
| cosine_precision@3 | 0.2638 |
| cosine_precision@5 | 0.1657 |
| cosine_precision@10 | 0.0881 |
| cosine_recall@1 | 0.66 |
| cosine_recall@3 | 0.7914 |
| cosine_recall@5 | 0.8286 |
| cosine_recall@10 | 0.8814 |
| cosine_ndcg@10 | 0.7707 |
| cosine_mrr@10 | 0.7354 |
| cosine_map@100 | 0.7396 |
Information Retrieval (dim_64)

| Metric | Value |
|:--------------------|:-------|
| cosine_accuracy@1 | 0.6271 |
| cosine_accuracy@3 | 0.7543 |
| cosine_accuracy@5 | 0.8014 |
| cosine_accuracy@10 | 0.86 |
| cosine_precision@1 | 0.6271 |
| cosine_precision@3 | 0.2514 |
| cosine_precision@5 | 0.1603 |
| cosine_precision@10 | 0.086 |
| cosine_recall@1 | 0.6271 |
| cosine_recall@3 | 0.7543 |
| cosine_recall@5 | 0.8014 |
| cosine_recall@10 | 0.86 |
| cosine_ndcg@10 | 0.7404 |
| cosine_mrr@10 | 0.7026 |
| cosine_map@100 | 0.7069 |
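These tables follow the output of sentence-transformers' InformationRetrievalEvaluator, with one table per embedding truncation size. A minimal sketch of how such metrics can be computed; the queries, corpus, and relevance judgments below are illustrative placeholders, not the actual evaluation split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("amichelini/bge-base-financial-matryoshka")

# Illustrative placeholders: query-id -> text, doc-id -> text,
# and query-id -> set of relevant doc-ids.
queries = {"q1": "How did Hasbro's net revenues in 2023 compare to the previous year?"}
corpus = {"d1": "2023 highlights include net revenues of $5,003.3 million which decreased 15% from $5,856.7 million in 2022."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_768")
results = evaluator(model)  # dict with accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100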
Training Details
Training Dataset
json
- Dataset: json
- Size: 6,300 training samples
- Columns: positive and anchor
- Approximate statistics based on the first 1000 samples:

| | positive | anchor |
|:--------|:---------------------------------------------------|:--------------------------------------------------|
| type | string | string |
| details | min: 4 tokens, mean: 46.33 tokens, max: 326 tokens | min: 7 tokens, mean: 20.38 tokens, max: 43 tokens |
- Samples:

| positive | anchor |
|:---------|:-------|
| The data includes transaction and integration costs listed as follows for each year: $0, $0, $59, $0, $0, $0, $269, $91, $39, $269, $91, $98. | What were the values of transaction and integration costs for each of the years provided in the data? |
| In 2023, Delta Air Lines announced an increase in remuneration from their partnership with American Express to $6.8 billion, with expected growth of 10% in 2024. | What was the remuneration from Delta Air Lines' partnership with American Express in 2023, and what is the growth expectation for 2024? |
| On December 1, 2023, we advanced $10.0 billion under the ASR program and received approximately 215 million shares of common stock with a value of $6.8 billion, which were immediately retired. | What significant financial activity occurred on December 1, 2023, under the ASR program? |
- Loss: MatryoshkaLoss with these parameters:

{
    "loss": "MultipleNegativesRankingLoss",
    "matryoshka_dims": [768, 512, 256, 128, 64],
    "matryoshka_weights": [1, 1, 1, 1, 1],
    "n_dims_per_step": -1
}
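A minimal sketch of how this loss configuration is typically constructed in sentence-transformers, assuming model is the SentenceTransformer being finetuned:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# In-batch-negatives ranking loss, applied to the first 768, 512, 256, 128,
# and 64 dimensions of every embedding with equal weight.
inner_loss = MultipleNegativesRankingLoss(model)
train_loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)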
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 16
- learning_rate: 2e-05
- num_train_epochs: 4
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- bf16: True
- tf32: True
- load_best_model_at_end: True
- optim: adamw_torch_fused
- batch_sampler: no_duplicates
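A minimal sketch of these non-default settings expressed with the sentence-transformers 3.x training arguments; output_dir and save_strategy are assumptions not listed above (save_strategy must match eval_strategy when load_best_model_at_end is enabled):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # illustrative placeholder
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed so load_best_model_at_end can match eval_strategy
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,  # effective train batch size of 512 per device
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # no duplicate texts within a batch
)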
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 16
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 4
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: True
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- eval_use_gather_object: False
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Training Logs
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0 | 0 | - | 0.6648 | 0.6922 | 0.6982 | 0.6028 | 0.7029 |
| 0.8122 | 10 | 1.5362 | - | - | - | - | - |
| 0.9746 | 12 | - | 0.7259 | 0.7402 | 0.7481 | 0.6913 | 0.7510 |
| 1.6244 | 20 | 0.6012 | - | - | - | - | - |
| 1.9492 | 24 | - | 0.7341 | 0.7503 | 0.7554 | 0.7051 | 0.7576 |
| 2.4365 | 30 | 0.4225 | - | - | - | - | - |
| 2.9239 | 36 | - | 0.7383 | 0.7522 | 0.7569 | 0.7063 | 0.7570 |
| 3.2487 | 40 | 0.358 | - | - | - | - | - |
| **3.8985** | **48** | **-** | **0.7396** | **0.7506** | **0.7559** | **0.7069** | **0.7585** |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}