worksphere-regulations-embedding_bge

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
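
Because the final Normalize() module L2-normalizes every embedding, cosine similarity reduces to a plain dot product. A minimal check (the example sentences are arbitrary; the model loads as shown in the Usage section below):

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sabber/worksphere-regulations-embedding_bge")
embeddings = model.encode(["curing compound delivery", "wind turbine setbacks"])

# The Normalize() module makes every vector unit-length...
print(np.linalg.norm(embeddings, axis=1))  # ~[1.0, 1.0]
# ...so cosine similarity is just a dot product
print(embeddings @ embeddings.T)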

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sabber/worksphere-regulations-embedding_bge")
# Run inference
sentences = [
    "How can I ensure that the curing compound we receive at the job site meets the required specifications with the manufacturer's original containers and labels intact?",
    "8. Curing:\n03 00 00\nCONCRETE AND CONCRETE REINFORCING\nPage 10 of 18\n6) Curing compound to be delivered to the job site in the manufacturer's original containers only, with original label containing the following:\na) Manufacturer's name\nb) Trade name of the material\nc) Batch number or symbol with which test samples may be correlated",
    '2. For Large Wind Energy Systems:\na. The minimum acreage for a large wind system shall be established based on the setbacks of the turbine(s) and the height of the turbine(s);\nb. All turbines located within the same large wind system property shall be of a similar tower design, including the type, number of blades, and direction of blade rotation;\nc. Large wind systems shall be setback at least one and one-half times the height of the turbine and rotor diameter from the property line. Large wind systems shall also be setback at least one and one-half times the height of the turbine from above ground telephone, electrical lines, and other uninhabitable structures;\nd. Towers shall not be climbable up to 15 feet above ground level.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]

Evaluation

Metrics

Information Retrieval

Metrics are reported at the three Matryoshka dimensionalities used during training (768, 512, and 256).

Metric               dim_768   dim_512   dim_256
cosine_accuracy@1    0.0307    0.0300    0.0295
cosine_accuracy@3    0.3986    0.3907    0.3774
cosine_accuracy@5    0.5774    0.5644    0.5502
cosine_accuracy@10   0.7881    0.7819    0.7644
cosine_precision@1   0.0307    0.0300    0.0295
cosine_precision@3   0.1329    0.1302    0.1258
cosine_precision@5   0.1155    0.1129    0.1100
cosine_precision@10  0.0788    0.0782    0.0764
cosine_recall@1      0.0307    0.0300    0.0295
cosine_recall@3      0.3986    0.3907    0.3774
cosine_recall@5      0.5774    0.5644    0.5502
cosine_recall@10     0.7881    0.7819    0.7644
cosine_ndcg@1        0.0307    0.0300    0.0295
cosine_ndcg@3        0.2318    0.2266    0.2191
cosine_ndcg@5        0.3041    0.2968    0.2887
cosine_ndcg@10       0.3753    0.3706    0.3613
cosine_mrr@1         0.0307    0.0300    0.0295
cosine_mrr@3         0.1752    0.1710    0.1654
cosine_mrr@5         0.2144    0.2091    0.2031
cosine_mrr@10        0.2457    0.2416    0.2350
cosine_map@100       0.2551    0.2512    0.2453
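
The smaller columns correspond to Matryoshka truncations of the full 768-dimensional embedding. Sentence Transformers can apply the truncation at load time via truncate_dim; a sketch using the 256-dimension setting from the table:

from sentence_transformers import SentenceTransformer

# truncate_dim clips each embedding to its first 256 dimensions at encode time
model = SentenceTransformer("sabber/worksphere-regulations-embedding_bge", truncate_dim=256)
embeddings = model.encode(["example sentence"])
print(embeddings.shape)  # (1, 256)

Per the table, dropping from 768 to 256 dimensions costs about 1.4 points of cosine_ndcg@10 (0.3753 to 0.3613) while shrinking the vectors threefold.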

Training Details

Training Dataset

Unnamed Dataset

  • Size: 17,198 training samples
  • Columns: question and context
  • Approximate statistics based on the first 1000 samples:
             question           context
    type     string             string
    details  min: 14 tokens     min: 23 tokens
             mean: 26.6 tokens  mean: 140.8 tokens
             max: 57 tokens     max: 259 tokens
  • Samples:
    (Contexts are quoted verbatim from the dataset and retain OCR noise from the source documents.)

    Sample 1
    question: Are there any specific guidelines or requirements for the installation of tree supports as outlined in the regulations?
    context: SECTION 32 93 00: Cast-in-Place 31 25 14 - Erosion and 32 13 13 - Concrete Paving. 32 13 16 - Decorative Concrete. a. Measurement 1) Measured per each Tree planted. b. Payment 1) The work performed and materials and measured as provided under price bid per each for Tree 2) Various caliper inches. The price bid shall include: 1) Furnishing and installing Tree as 2) Preparing excavation pit 3) Topsoil, fertilizer, mulch, and planting mix, 1 = . , 1 = Tree. , 1 = furnished in accordance with this item "Measurement" will be paid for at the unit for:. planted, 1 = . specified, 1 = . by the Drawings, 1 = . supports, 1 = . [Insert Bid Number], 1 = . [Insert, 1 = . 4), 1 = Plant. Number], 1 = Number]. Engineering Project, 1 = Engineering Project / Effective July 1, 2024 / 32 93 00 / PLANTINGS / Page 2 of 24 / eee / BER / BPRERR

    Sample 2
    question: What specific information do I need to include in my application to meet the standards for grouted installations?
    context: 1.1 SUMMARY: = . 36, 2 = . 36, 3 = (1) requirements a qualified testing laboratory.. 37, 1 = . 37, 2 = . 37, 3 = Submit a minimum of 3 other similar projects where the proposed grout mix. 38, 1 = . 38, 2 = . 38, 3 = design was used.. 39 40, 1 = . 39 40, 2 = . 39 40, 3 = anticipated volumes of grout to be pumped for each. , 1 = . , 2 = . , 3 = Submit application and reach grouted.. 41, 1 = 4.. 41, 2 = . 41, 3 = Additional requirements for installations of carrier pipe 24-inch and larger:. 42, 1 = . 42, 2 = a.. 42, 3 = Submit work plan describing the carrier pipe installation equipment, materials. 43 44, 1 = . 43 44, 2 = b.. 43 44, 3 = employed. For installations without holding jacks or a restrained spacer, provide buoyant / CITY OF DENTON STANDARD CONSTRUCTION SPECIFICATION DOCUMENTS Revised October 22, 2020 Effective July 1, 2024 / [Insert Engineering Project Number] [Insert Bid Number] / eK / BWN / nA / 20 / 21 / 22 / 23 / 24

    Sample 3
    question: In the event of a quasi judicial hearing, who else besides the site owner(s) should we inform about the decision notification process, and how do we manage their requests for a copy of the decision?
    context: Notice of Decision: 1. Within 10 days after a final decision on an application, the Director shall provide written notification of the decision, unless the applicant was present at the meeting where the decision was made or required by law. 2. If the review involves a quasi-judicial hearing, the Director shall, within 10 days after a final decision on the application, provide a written notification of the decision to the owner(s) of the subject site (unless the applicant was present at the meeting where the decision was made or required by law), and any other person that submitted a written request for a copy of the decision before its effective date.
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256
        ],
        "matryoshka_weights": [
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
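
A sketch of how this loss combination is typically constructed in Sentence Transformers, using the parameters above:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# In-batch negatives ranking loss, wrapped so it is applied at each
# truncated dimensionality with equal weight
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256],
    matryoshka_weights=[1, 1, 1],
)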
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 8
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
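
A sketch of the corresponding SentenceTransformerTrainingArguments; output_dir is a placeholder, and save_strategy is an assumption (load_best_model_at_end requires it to match eval_strategy):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",                 # placeholder
    num_train_epochs=8,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,      # effective batch size of 512 per device
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    eval_strategy="epoch",
    save_strategy="epoch",               # assumed, to match eval_strategy
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)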

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 8
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
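
Putting the pieces together, training presumably followed the standard SentenceTransformerTrainer recipe; a sketch assuming the model and loss from the loss sketch, the args from the hyperparameter sketch, and a Hugging Face datasets object with question and context columns:

from datasets import Dataset
from sentence_transformers import SentenceTransformerTrainer

# Hypothetical stand-in for the 17,198-pair training set
train_dataset = Dataset.from_dict({
    "question": ["How far must a turbine be from the property line?"],
    "context": ["Large wind systems shall be setback at least one and one-half times the height of the turbine."],
})

trainer = SentenceTransformerTrainer(
    model=model,                # from the loss sketch above
    args=args,                  # from the hyperparameter sketch above
    train_dataset=train_dataset,
    loss=loss,                  # MatryoshkaLoss from the loss sketch above
)
trainer.train()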

Training Logs

Epoch    Step  Training Loss  dim_768_cosine_ndcg@10  dim_512_cosine_ndcg@10  dim_256_cosine_ndcg@10
0.2974   10    2.3168         -                       -                       -
0.5948   20    1.2839         -                       -                       -
0.8922   30    0.6758         -                       -                       -
0.9814   33    -              0.3592                  0.3556                  0.3496
1.1896   40    0.4651         -                       -                       -
1.4870   50    0.3707         -                       -                       -
1.7844   60    0.2941         -                       -                       -
1.9926   67    -              0.3732                  0.3699                  0.3601
2.0818   70    0.2651         -                       -                       -
2.3792   80    0.2341         -                       -                       -
2.6766   90    0.2093         -                       -                       -
2.9740   100   0.1812         0.3747                  0.3718                  0.3626
3.2714   110   0.1717         -                       -                       -
3.5688   120   0.1496         -                       -                       -
3.8662   130   0.1472         -                       -                       -
3.9851   134   -              0.3742                  0.3727                  0.3628
4.1636   140   0.1304         -                       -                       -
4.4610   150   0.1229         -                       -                       -
4.7584   160   0.1085         -                       -                       -
4.9963   168   -              0.3745                  0.3717                  0.3610
5.0558   170   0.1144         -                       -                       -
5.3532   180   0.1088         -                       -                       -
5.6506   190   0.0937         -                       -                       -
5.9480   200   0.1023         -                       -                       -
5.9777   201   -              0.3749                  0.3704                  0.3603
6.2454   210   0.0942         -                       -                       -
6.5428   220   0.0919         -                       -                       -
6.8401   230   0.0939         -                       -                       -
6.9888   235   -              0.3755                  0.3705                  0.3603
7.1375   240   0.0925         -                       -                       -
7.4349   250   0.0928         -                       -                       -
7.7323   260   0.0869         -                       -                       -
7.8513*  264   -              0.3753                  0.3706                  0.3613
  • The row marked with * denotes the saved checkpoint; its scores match the Evaluation section above.

Framework Versions

  • Python: 3.11.10
  • Sentence Transformers: 3.3.1
  • Transformers: 4.41.2
  • PyTorch: 2.4.1+cu124
  • Accelerate: 1.3.0
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1
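
To reproduce this environment, the library versions above can be pinned at install time; PyTorch 2.4.1 with the cu124 build is installed separately from the PyTorch index:

pip install sentence-transformers==3.3.1 transformers==4.41.2 accelerate==1.3.0 datasets==2.19.1 tokenizers==0.19.1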

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}