---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:21
  - loss:MultipleNegativesRankingLoss
widget:
  - source_sentence: >-
      | Config                            | Model                  | Epochs |
      Max seq length | Micro batch size | Machine | Training runtime | Cost |
      Peak memory | Validation loss | Validation perplexity | Multitask score
      (MMLU) |

      | --------------------------------- | ---------------------- | ------ |
      -------------- | ---------------- | ------- | ---------------- | ---- |
      ----------- | --------------- | --------------------- | --------------- |

      | falcon-7b/lora.yaml               | falcon-7b              | 4      |
      512            | 1                | 1xA10G  | 24.84 min        | $0.7 |
      16.69 GB    | 0.945           | 2.573                 | 26.2%           |

      | falcon-7b/lora.yaml               | falcon-7b              | 4      |
      512            | 1                | 4xA10G  | 24.94 min        | $2.0 |
      16.69 GB    | 0.945           | 2.573                 | 26.4%           |

      | falcon-7b/qlora.yaml              | falcon-7b              | 4      |
      512            | 1                | 1xA10G  | 50.85 min        | $1.5 |
      9.44 GB     | 0.993           | 2.699                 | 26.3%           |

      | falcon-7b/qlora.yaml              | falcon-7b              | 4      |
      512            | 1                | 4xA10G  | 50.88 min        | $4.1 |
      9.44 GB     | 0.993           | 2.699                 | 26.3%           |

      |                                   |                        |       
      |                |                  |         |                  |     
      |             |                 |                       |                
      |

      | gemma-2b/full.yaml                | gemma-2b               | 1      |
      512            | 1                | 4xA10G  | 14.06 min        | $1.1 |
      17.43 GB    | 1.021           | 2.777                 | 32.4%           |

      | gemma-2b/lora.yaml                | gemma-2b               | 2      |
      512            | 2                | 1xA10G  | 9.41 min         | $0.3 |
      12.62 GB    | 0.981           | 2.666                 | 34.4%           |
    sentences:
      - >
        What is the command to download the pretrained model weights for the
        Llama-2-7b-hf model?
      - |
        What is the version of nvfuser\_cu121 used?
      - >
        What is the training runtime for the gemma-2b model with the lora
        configuration?
  - source_sentence: >-
      # Serve and Deploy LLMs


      This document shows how you can serve a LitGPT for deployment. 


       

      ## Serve an LLM


      This section illustrates how we can set up an inference server for a phi-2
      LLM using `litgpt serve` that is minimal and highly scalable.



       

      ## Step 1: Start the inference server



      ```bash

      # 1) Download a pretrained model (alternatively, use your own finetuned
      model)

      litgpt download --repo_id microsoft/phi-2


      # 2) Start the server

      litgpt serve --checkpoint_dir checkpoints/microsoft/phi-2

      ```


      > [!TIP]

      > Use `litgpt serve --help` to display additional options, including the
      port, devices, LLM temperature setting, and more.



       

      ## Step 2: Query the inference server


      You can now send requests to the inference server you started in step 2.
      For example, in a new Python session, we can send requests to the
      inference server as follows:



      ```python

      import requests, json


      response = requests.post(
          "http://127.0.0.1:8000/predict", 
          json={"prompt": "Fix typos in the following sentence: Exampel input"}
      )


      print(response.json()["output"])

      ```


      Executing the code above prints the following output:


      ```

      Instruct: Fix typos in the following sentence: Exampel input

      Output: Example input.

      ```
    sentences:
      - >
        What command do I use to convert the finetuned model to a HF transformer
        model?
      - |
        How do you merge LoRA weights into the original model's checkpoint?
      - |
        How can I start an inference server for a phi-2 LLM using litgpt serve?
---

# SentenceTransformer based on BAAI/bge-base-en-v1.5

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
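
The values above can be checked programmatically. This is a small sketch using standard Sentence Transformers accessors; the expected outputs in the comments assume the numbers listed in this card:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("aritrasen/bge-base-en-v1.5-ft")

print(model.get_max_seq_length())                # 512
print(model.get_sentence_embedding_dimension())  # 768

# Embeddings pass through the Normalize module, so they are L2-normalized:
# the dot product of an embedding with itself is ~1.0, consistent with
# cosine similarity as the similarity function.
emb = model.encode("hello world")
print(emb.shape, float(emb @ emb))               # (768,) ~1.0
```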

### Model Sources

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
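
For illustration, the three modules amount to a BGE-style pipeline: run the BertModel, take the CLS-token embedding, then L2-normalize it. The sketch below reproduces that with plain Hugging Face transformers; it assumes the repository also exposes the underlying BertModel weights, as Sentence Transformers checkpoints typically do:

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

model_id = "aritrasen/bge-base-en-v1.5-ft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
bert = AutoModel.from_pretrained(model_id)

batch = tokenizer(
    ["How can I start an inference server for a phi-2 LLM using litgpt serve?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    hidden = bert(**batch).last_hidden_state  # (0) Transformer: BertModel
cls = hidden[:, 0]                            # (1) Pooling: CLS token only
embedding = F.normalize(cls, p=2, dim=1)      # (2) Normalize: unit-length vectors
print(embedding.shape)                        # torch.Size([1, 768])
```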

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("aritrasen/bge-base-en-v1.5-ft")
# Run inference
sentences = [
    '# Serve and Deploy LLMs\n\nThis document shows how you can serve a LitGPT for deployment. \n\n \n## Serve an LLM\n\nThis section illustrates how we can set up an inference server for a phi-2 LLM using `litgpt serve` that is minimal and highly scalable.\n\n\n \n## Step 1: Start the inference server\n\n\n```bash\n# 1) Download a pretrained model (alternatively, use your own finetuned model)\nlitgpt download --repo_id microsoft/phi-2\n\n# 2) Start the server\nlitgpt serve --checkpoint_dir checkpoints/microsoft/phi-2\n```\n\n> [!TIP]\n> Use `litgpt serve --help` to display additional options, including the port, devices, LLM temperature setting, and more.\n\n\n \n## Step 2: Query the inference server\n\nYou can now send requests to the inference server you started in step 2. For example, in a new Python session, we can send requests to the inference server as follows:\n\n\n```python\nimport requests, json\n\nresponse = requests.post(\n    "http://127.0.0.1:8000/predict", \n    json={"prompt": "Fix typos in the following sentence: Exampel input"}\n)\n\nprint(response.json()["output"])\n```\n\nExecuting the code above prints the following output:\n\n```\nInstruct: Fix typos in the following sentence: Exampel input\nOutput: Example input.\n```',
    'How can I start an inference server for a phi-2 LLM using litgpt serve?\n',
    'What command do I use to convert the finetuned model to a HF transformer model?\n',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
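
Because the training pairs are (documentation chunk, question), a common downstream use is retrieval: embed a query and a set of documents, then rank the documents by cosine similarity. The documents below are illustrative placeholders, not part of the training data:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("aritrasen/bge-base-en-v1.5-ft")

# Placeholder corpus in the spirit of the training data (LitGPT documentation chunks)
docs = [
    "litgpt serve starts a minimal, scalable inference server for a downloaded checkpoint.",
    "litgpt download fetches pretrained model weights from the Hugging Face Hub.",
]
query = "How do I start an inference server?"

doc_embeddings = model.encode(docs)
query_embedding = model.encode(query)

# Cosine similarity between the query and every document
scores = model.similarity(query_embedding, doc_embeddings)  # shape: [1, 2]
best = scores.argmax().item()
print(docs[best])
```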

## Training Details

### Training Dataset

#### Unnamed Dataset

  • Size: 21 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    |         | anchor                                               | positive                                           |
    |:--------|:-----------------------------------------------------|:---------------------------------------------------|
    | type    | string                                               | string                                             |
    | details | min: 51 tokens, mean: 424.62 tokens, max: 512 tokens | min: 12 tokens, mean: 17.19 tokens, max: 26 tokens |
  • Samples:
    anchor positive
    7 B
    1. Follow the instructions above to load the model into a Hugging Face transformers model.

    2. Create a model.safetensor file:

    python<br>model.save_pretrained("out/hf-tinyllama/converted/")<br>

    3. Copy the tokenizer files into the model-containing directory:

    bash<br>cp checkpoints/$repo_id/tokenizer* out/hf-tinyllama/converted<br>

    4. Run the evaluation harness, for example:

    bash<br>lm_eval --model hf \<br> --model_args pretrained=out/hf-tinyllama/converted \<br> --tasks "hellaswag,gsm8k,truthfulqa_mc2,mmlu,winogrande,arc_challenge" \<br> --device "cuda:0" \<br> --batch_size 4<br>
    What is the command to run the evaluation harness?
    The LM Evaluation Harness requires a tokenizer to be present in the model checkpoint folder, which we can copy from the original download checkpoint:

    bash<br># Copy the tokenizer needed by the Eval Harness<br>cp checkpoints/microsoft/phi-2/tokenizer*<br>out/converted_model<br>

    Then, we can run the Evaluation Harness as follows:

    bash<br>lm_eval --model hf \<br> --model_args pretrained="out/converted_model" \<br> --tasks "hellaswag,gsm8k,truthfulqa_mc2,mmlu,winogrande,arc_challenge" \<br> --device "cuda:0" \<br> --batch_size 4<br>

     

    > [!TIP]
    > The Evaluation Harness tasks above are those used in Open LLM Leaderboard. You can find a list all supported tasks here.



     
    More information and additional resources

    - tutorials/convert_lit_models: Tutorial on converting LitGPT weights



     

    ## Get involved!

    We appreciate your feedback and contributions. If you have feature requests, questions, or want to contribute code or config files, please don't hesitate to use the GitHub Issue tracker.

    We welcome all individual contributors, regardless of their level of experience or hardware. Your contributions are valuable, and we are excited to see what you can accomplish in this collaborative and supportive environment.

     

    > [!TIP]
    > Unsure about contributing? Check out our How to Contribute to LitGPT guide.

     

    If you have general questions about building with LitGPT, please join our Discord.
    What is the command to run the Evaluation Harness?
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
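
As a rough sketch (not the author's actual training script), a loss with these parameters is typically constructed in Sentence Transformers as follows; each (anchor, positive) pair uses the other positives in the batch as in-batch negatives:

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# scale is the temperature applied to the similarity scores before cross-entropy
loss = losses.MultipleNegativesRankingLoss(
    model,
    scale=20.0,
    similarity_fct=util.cos_sim,  # the "cos_sim" setting above
)
```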
    

### Evaluation Dataset

#### Unnamed Dataset

  • Size: 10 evaluation samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    |         | anchor                                               | positive                                          |
    |:--------|:-----------------------------------------------------|:--------------------------------------------------|
    | type    | string                                               | string                                            |
    | details | min: 273 tokens, mean: 460.8 tokens, max: 512 tokens | min: 10 tokens, mean: 20.1 tokens, max: 34 tokens |
  • Samples:
    anchor positive
    (this table was sourced from the author's README)

     
    ## Download datasets

    You can download the data using git lfs:

    bash<br># Make sure you have git-lfs installed (https://git-lfs.com):<br>sudo apt install git-lfs<br>

    bash<br>git clone https://huggingface.co/datasets/cerebras/slimpajama-627b data/slimpajama-raw<br>git clone https://huggingface.co/datasets/bigcode/starcoderdata data/starcoderdata-raw<br>

    Around 1.2 TB of disk space is required to store both datasets.

     
    ## Prepare the datasets for training

    In order to start pretraining litgpt on it, you need to read, tokenize, and write the data in binary chunks. This will leverage the litdata optimization pipeline and streaming dataset.

    First, install additional dependencies for preprocessing:

    bash<br>pip install '.[all]'<br>

    You will need to have the tokenizer config available:

    bash<br>litgpt download \<br> --repo_id meta-llama/Llama-2-7b-hf \<br> --access_token your_hf_token \<br> --tokenizer_only true<br>

    Then, run the preprocessing script for each dataset and split.
    You will require 1.1 TB of disk space for Starcoder and 2.5 TB of space for the SlimPajama dataset.

    Starcoder:

    bash<br>python litgpt/data/prepare_starcoder.py \<br> --input_dir data/starcoderdata-raw \<br> --output_dir data/starcoder \<br> --tokenizer_path checkpoints/meta-llama/Llama-2-7b-hf<br>

    SlimPajama:

    bash<br>python litgpt/data/prepare_slimpajama.py \<br> --input_dir data/slimpajama-raw/validation \<br> --output_dir data/slimpajama/val \<br> --tokenizer_path checkpoints/meta-llama/Llama-2-7b-hf<br><br>python litgpt/data/prepare_slimpajama.py \<br> --input_dir data/slimpajama-raw/test \<br> --output_dir data/slimpajama/test \<br> --tokenizer_path checkpoints/meta-llama/Llama-2-7b-hf<br><br>python litgpt/data/prepare_slimpajama.py \<br> --input_dir data/slimpajama-raw/train \<br> --output_dir data/slimpajama/train \<br> --tokenizer_path checkpoints/meta-llama/Llama-2-7b-hf<br>
    How much disk space is required to store the SlimPajama dataset?
    # Serve and Deploy LLMs

    This document shows how you can serve a LitGPT for deployment.

     
    ## Serve an LLM

    This section illustrates how we can set up an inference server for a phi-2 LLM using litgpt serve that is minimal and highly scalable.


     
    ## Step 1: Start the inference server


    bash<br># 1) Download a pretrained model (alternatively, use your own finetuned model)<br>litgpt download --repo_id microsoft/phi-2<br><br># 2) Start the server<br>litgpt serve --checkpoint_dir checkpoints/microsoft/phi-2<br>

    > [!TIP]
    > Use litgpt serve --help to display additional options, including the port, devices, LLM temperature setting, and more.


     
    ## Step 2: Query the inference server

    You can now send requests to the inference server you started in step 2. For example, in a new Python session, we can send requests to the inference server as follows:


    python<br>import requests, json<br><br>response = requests.post(<br> "http://127.0.0.1:8000/predict", <br> json={"prompt": "Fix typos in the following sentence: Exampel input"}<br>)<br><br>print(response.json()["output"])<br>

    Executing the code above prints the following output:

    <br>Instruct: Fix typos in the following sentence: Exampel input<br>Output: Example input.<br>
    How can I start an inference server for a phi-2 LLM using litgpt serve?
    # TPU support

    This project utilizes Fabric, which supports TPUs via PyTorch XLA.

    > [!NOTE]
    > This guide assumes that you have already set-up your Google Cloud environment.

    To set up a Google Cloud instance with a TPU v4 VM, run the following commands:

    shell<br>gcloud compute tpus tpu-vm create litgpt --version=tpu-vm-v4-base --accelerator-type=v4-8 --zone=us-central2-b<br>gcloud compute tpus tpu-vm ssh litgpt --zone=us-central2-b<br>

    You can also choose a different TPU type. To do so, change the version, accelerator-type, and zone arguments. Find all regions and zones here.


    Multihost caveats

    TPU v4-8 uses a single host. SSH'ing into the machine and running commands manually will only work when using a single host (1 slice in the TPU pod).
    In multi-host environments, such as larger TPU pod slices, it's necessary to launch all commands on all hosts simultaneously to avoid hangs.
    For local development, it is advisable to upload a zip file containing all your current changes and execute it inside the VM from your personal computer:

    ```shell
    # Zip the local directory, excluding large directories from the zip. You may want to keep them.
    zip -r local_changes.zip . -x ".git/" "checkpoints/" "data/" "out/"
    # Copy the .zip file to the TPU VM
    gcloud compute tpus tpu-vm scp --worker=all local_changes.zip "litgpt:~"
    # Unzip on each host
    gcloud compute tpus tpu-vm ssh litgpt --worker=all --command="cd ~; unzip -q -o local_changes.zip"

    # Example of a typical workflow
    gcloud compute tpus tpu-vm ssh tmp --worker=all --command="cd ~; bash install_dependencies.sh"
    gcloud compute tpus tpu-vm ssh tmp --worker=all --command="cd ~; bash prepare_checkpoints.sh"
    gcloud compute tpus tpu-vm ssh tmp --worker=all --command="cd ~; bash run_desired_script.sh"
    ```
    How does this project support TPUs?
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

### Training Hyperparameters

#### Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 5
  • per_device_eval_batch_size: 5
  • num_train_epochs: 5
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates
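
For reference, here is a hedged sketch of a Sentence Transformers 3.x training setup matching these non-default values; the one-row datasets are placeholders standing in for the 21 training and 10 evaluation pairs described above:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
loss = losses.MultipleNegativesRankingLoss(model)

# Placeholder (anchor, positive) pairs; the real data are documentation chunks and questions
train_dataset = Dataset.from_dict({
    "anchor": ["<documentation chunk>"],
    "positive": ["<question about the chunk>"],
})
eval_dataset = Dataset.from_dict({
    "anchor": ["<documentation chunk>"],
    "positive": ["<question about the chunk>"],
})

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-en-v1.5-ft",
    eval_strategy="steps",
    per_device_train_batch_size=5,
    per_device_eval_batch_size=5,
    num_train_epochs=5,
    warmup_ratio=0.1,
    fp16=True,                      # requires a GPU
    batch_sampler="no_duplicates",  # avoid duplicate texts within a batch
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```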

#### All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 5
  • per_device_eval_batch_size: 5
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

### Training Logs

| Epoch | Step | Training Loss | Validation Loss |
|:------|:-----|:--------------|:----------------|
| 0.4   | 2    | 0.6407        | 0.4190          |
| 0.8   | 4    | 0.7873        | 0.2789          |
| 1.2   | 6    | 0.1871        | 0.2089          |
| 1.6   | 8    | 0.2125        | 0.1718          |
| 2.0   | 10   | 0.0374        | 0.1648          |
| 2.4   | 12   | 0.1923        | 0.1695          |
| 2.8   | 14   | 0.0183        | 0.1723          |
| 3.2   | 16   | 0.1582        | 0.1770          |
| 3.6   | 18   | 0.0032        | 0.1824          |
| 4.0   | 20   | 0.0015        | 0.1870          |
| 4.4   | 22   | 0.1399        | 0.1901          |
| 4.8   | 24   | 0.002         | 0.1914          |

### Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.3.0+cu121
  • Accelerate: 0.27.0
  • Datasets: 2.20.0
  • Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```