metadata
language: []
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:2036
- loss:MultipleNegativesRankingLoss
base_model: google-bert/bert-base-uncased
datasets: []
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
widget:
- source_sentence: >-
Proven ability to establish and lead complex projects and programs within
a multilayered, hierarchical organization.
sentences:
- Managed multiple concurrent projects in a large healthcare organization
- >-
Assisted in project documentation without direct management
responsibilities
- Skilled in creating presentations using Microsoft PowerPoint
- source_sentence: >-
Experience in evaluating and planning projects to minimize scheduled
overtime requirements.
sentences:
- Validated release packages and coordinated Salesforce release cycles
- Oversaw daily housekeeping operations
- Successfully managed facility renovation projects to reduce overtime
- source_sentence: >-
Candidates should have significant experience in a commercial construction
environment, ideally with a minimum of 10 years in the field.
sentences:
- >-
Built strong partnerships with cross-functional teams to deliver
projects
- over 12 years of experience managing commercial construction projects
- 2 years of experience in residential construction
- source_sentence: Possession of strong leadership skills in a Workday professional context.
sentences:
- 3 years of experience with cardiac mapping technologies
- Managed Workday implementation projects and trained team members
- Developed marketing strategies for new products
- source_sentence: >-
Ability to manage TikTok Shop setup and troubleshoot operational issues
effectively.
sentences:
- Troubleshot various operational issues during the setup of a TikTok Shop
- Handled customer support queries for social media platforms
- Consistently maintained client trust through transparent communication
pipeline_tag: sentence-similarity
model-index:
- name: SentenceTransformer based on google-bert/bert-base-uncased
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev
type: sts-dev
metrics:
- type: pearson_cosine
value: 0.7481079446812986
name: Pearson Cosine
- type: spearman_cosine
value: 0.7505186904322839
name: Spearman Cosine
- type: pearson_manhattan
value: 0.7554763601200802
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.758901200634132
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.7545320893124581
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.7581291583714751
name: Spearman Euclidean
- type: pearson_dot
value: 0.6010864985986635
name: Pearson Dot
- type: spearman_dot
value: 0.5940811367263572
name: Spearman Dot
- type: pearson_max
value: 0.7554763601200802
name: Pearson Max
- type: spearman_max
value: 0.758901200634132
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test
type: sts-test
metrics:
- type: pearson_cosine
value: 0.7078369274551736
name: Pearson Cosine
- type: spearman_cosine
value: 0.6860532079702527
name: Spearman Cosine
- type: pearson_manhattan
value: 0.7195614364247788
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6992090523383406
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.7199683293098692
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.699729559217933
name: Spearman Euclidean
- type: pearson_dot
value: 0.4876300833689144
name: Pearson Dot
- type: spearman_dot
value: 0.47135994215107385
name: Spearman Dot
- type: pearson_max
value: 0.7199683293098692
name: Pearson Max
- type: spearman_max
value: 0.699729559217933
name: Spearman Max
SentenceTransformer based on google-bert/bert-base-uncased
This is a sentence-transformers model finetuned from google-bert/bert-base-uncased. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: google-bert/bert-base-uncased
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: [Sentence Transformers Documentation](https://sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
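The Pooling module performs mean pooling: the BERT token embeddings are averaged (ignoring padding) into a single 768-dimensional sentence vector. For illustration only, the sketch below reproduces that step with plain transformers; it assumes the checkpoint can be loaded directly with AutoModel, and the example sentences are placeholders.

```python
# Sketch: mean pooling over BERT token embeddings, mirroring the
# Transformer + Pooling stack above. Example sentences are placeholders.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("trbeers/bert-base-uncased-nli-v0")
model = AutoModel.from_pretrained("trbeers/bert-base-uncased-nli-v0")

sentences = ["Managed Workday implementation projects", "Developed marketing strategies"]
encoded = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state        # (batch, seq_len, 768)

# Mean pooling: average the token embeddings, ignoring padding positions.
mask = encoded["attention_mask"].unsqueeze(-1).float()           # (batch, seq_len, 1)
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(sentence_embeddings.shape)  # torch.Size([2, 768])
```

In practice, SentenceTransformer handles tokenization, pooling, and batching internally, so the Usage section below is the recommended path.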
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("trbeers/bert-base-uncased-nli-v0")
# Run inference
sentences = [
    'Ability to manage TikTok Shop setup and troubleshoot operational issues effectively.',
    'Troubleshot various operational issues during the setup of a TikTok Shop',
    'Handled customer support queries for social media platforms',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
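Because the anchors in the training data are job requirements and the positives/negatives are resume-style statements, the similarity scores can be used directly to rank candidate statements against a requirement. A short follow-on sketch, reusing the model loaded above; the requirement and candidate strings are illustrative (taken from the widget examples), not a prescribed workflow:

```python
# Rank candidate experience statements against a single requirement.
requirement = "Experience managing commercial construction projects"
candidates = [
    "over 12 years of experience managing commercial construction projects",
    "2 years of experience in residential construction",
    "Developed marketing strategies for new products",
]

req_emb = model.encode([requirement])
cand_embs = model.encode(candidates)
scores = model.similarity(req_emb, cand_embs)[0]   # cosine similarities, shape (3,)

for score, text in sorted(zip(scores.tolist(), candidates), reverse=True):
    print(f"{score:.3f}  {text}")
```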
Evaluation
Metrics
Semantic Similarity
- Dataset: sts-dev
- Evaluated with EmbeddingSimilarityEvaluator
Metric | Value |
---|---|
pearson_cosine | 0.7481 |
spearman_cosine | 0.7505 |
pearson_manhattan | 0.7555 |
spearman_manhattan | 0.7589 |
pearson_euclidean | 0.7545 |
spearman_euclidean | 0.7581 |
pearson_dot | 0.6011 |
spearman_dot | 0.5941 |
pearson_max | 0.7555 |
spearman_max | 0.7589 |
Semantic Similarity
- Dataset: sts-test
- Evaluated with EmbeddingSimilarityEvaluator
Metric | Value |
---|---|
pearson_cosine | 0.7078 |
spearman_cosine | 0.6861 |
pearson_manhattan | 0.7196 |
spearman_manhattan | 0.6992 |
pearson_euclidean | 0.7200 |
spearman_euclidean | 0.6997 |
pearson_dot | 0.4876 |
spearman_dot | 0.4714 |
pearson_max | 0.7200 |
spearman_max | 0.6997 |
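Both tables are produced by correlating the model's pair similarities with gold similarity labels. As a point of reference, the spearman_cosine value can be reproduced along the following lines; the sentence lists and gold scores below are placeholders for an STS-style split, not the actual sts-dev or sts-test data:

```python
# Sketch of how spearman_cosine is obtained for an STS-style split: correlate
# the model's cosine similarities with human similarity labels. The three lists
# below are placeholders; fill them with the split's aligned pairs and labels.
import numpy as np
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("trbeers/bert-base-uncased-nli-v0")

sentences1 = ["first sentence of each pair"]   # placeholder
sentences2 = ["second sentence of each pair"]  # placeholder
gold_scores = [1.0]                            # placeholder human labels

emb1 = model.encode(sentences1, normalize_embeddings=True)
emb2 = model.encode(sentences2, normalize_embeddings=True)
cosine_scores = np.sum(emb1 * emb2, axis=1)    # cosine similarity per aligned pair

spearman_cosine, _ = spearmanr(gold_scores, cosine_scores)
print(spearman_cosine)
```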
Training Details
Training Dataset
Unnamed Dataset
- Size: 2,036 training samples
- Columns: anchor, positive, and negative
- Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|---|---|---|---|
| type | string | string | string |
| details | min: 7 tokens, mean: 16.07 tokens, max: 39 tokens | min: 7 tokens, mean: 11.23 tokens, max: 24 tokens | min: 5 tokens, mean: 8.39 tokens, max: 15 tokens |
- Samples:
| anchor | positive | negative |
|---|---|---|
| Sensitivity to the needs of patients, families, and physicians to deliver compassionate care. | worked closely with families to address patient concerns | specialized in technical equipment management without direct patient contact |
| Ability to lift 25 lbs. or more as required for handling athletic equipment. | Handled and organized equipment, ensuring safe lifting of heavy items | Coordinated scheduling for team practices and meetings |
| The candidate should have significant development experience, preferably around 10 years. | developed and implemented data architecture projects for a decade | worked in customer service for 5 years |
- Loss: MultipleNegativesRankingLoss with these parameters:
  { "scale": 20.0, "similarity_fct": "cos_sim" }
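For intuition, MultipleNegativesRankingLoss scores each anchor against every positive (and explicit negative) in the batch and applies cross-entropy so that the anchor's own positive receives the highest scaled cosine similarity. The sketch below mirrors that objective with the parameters above; it is a conceptual re-implementation, not the library's own code:

```python
# Conceptual sketch of MultipleNegativesRankingLoss with scale=20.0 and
# cosine similarity, applied to (anchor, positive, negative) triplets.
import torch
import torch.nn.functional as F

def mnr_loss(anchor_emb, positive_emb, negative_emb, scale=20.0):
    # Candidates for every anchor: all in-batch positives plus all explicit negatives.
    candidates = torch.cat([positive_emb, negative_emb], dim=0)            # (2B, dim)
    scores = scale * F.cosine_similarity(
        anchor_emb.unsqueeze(1), candidates.unsqueeze(0), dim=-1           # (B, 2B)
    )
    # The correct candidate for anchor i is positive i.
    labels = torch.arange(anchor_emb.size(0), device=anchor_emb.device)
    return F.cross_entropy(scores, labels)

# Example with random embeddings (batch of 4, 768-dimensional).
a, p, n = (torch.randn(4, 768) for _ in range(3))
print(mnr_loss(a, p, n))
```

Because every other example in the batch acts as a negative, larger batches give stronger training signal, which is consistent with the per_device_train_batch_size of 128 and the no_duplicates batch sampler listed under Training Hyperparameters.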
Evaluation Dataset
Unnamed Dataset
- Size: 510 evaluation samples
- Columns: anchor, positive, and negative
- Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|---|---|---|---|
| type | string | string | string |
| details | min: 8 tokens, mean: 16.39 tokens, max: 34 tokens | min: 6 tokens, mean: 11.34 tokens, max: 20 tokens | min: 5 tokens, mean: 8.41 tokens, max: 16 tokens |
- Samples:
| anchor | positive | negative |
|---|---|---|
| Qualified to provide personalized and friendly client interactions | Assisted clients with inquiries and ensured a welcoming environment | Conducted market research for product development |
| Understanding of network architecture principles and design patterns is critical. | Designed and implemented network architectures for cloud-based solutions | Managed on-premises server infrastructure |
| Knowledge of cloud technologies and their implications for customer engagement. | Managed customer onboarding for cloud-based services | Handled sales inquiries for software licenses |
- Loss: MultipleNegativesRankingLoss with these parameters:
  { "scale": 20.0, "similarity_fct": "cos_sim" }
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 128
- per_device_eval_batch_size: 128
- num_train_epochs: 1
- warmup_ratio: 0.1
- batch_sampler: no_duplicates
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 128
- per_device_eval_batch_size: 128
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
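Putting the non-default values together, the fine-tuning run can be outlined with the Sentence Transformers 3.x trainer API. The sketch below approximates the setup implied by this card: the output directory is a placeholder, and the single-row datasets stand in for the full 2,036-triplet training set and 510-triplet evaluation set.

```python
# Sketch of the fine-tuning setup implied by the hyperparameters above.
# output_dir is a placeholder; the one-row datasets are illustrative only.
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("google-bert/bert-base-uncased")
loss = MultipleNegativesRankingLoss(model, scale=20.0)

train_dataset = Dataset.from_dict({
    "anchor": ["Ability to lift 25 lbs. or more as required for handling athletic equipment."],
    "positive": ["Handled and organized equipment, ensuring safe lifting of heavy items"],
    "negative": ["Coordinated scheduling for team practices and meetings"],
})
eval_dataset = Dataset.from_dict({
    "anchor": ["Qualified to provide personalized and friendly client interactions"],
    "positive": ["Assisted clients with inquiries and ensured a welcoming environment"],
    "negative": ["Conducted market research for product development"],
})

args = SentenceTransformerTrainingArguments(
    output_dir="bert-base-uncased-nli-v0",       # placeholder
    num_train_epochs=1,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    warmup_ratio=0.1,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,   # no_duplicates
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```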
Training Logs
Epoch | Step | Training Loss | sts-dev_spearman_cosine | sts-test_spearman_cosine |
---|---|---|---|---|
0 | 0 | - | 0.5931 | - |
0.625 | 10 | 1.4252 | 0.7505 | - |
1.0 | 16 | - | - | 0.6861 |
Framework Versions
- Python: 3.10.11
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.1
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```