SetFit with BAAI/bge-base-en-v1.5

This is a SetFit model for text classification. It uses BAAI/bge-base-en-v1.5 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
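The contrastive step in (1) operates on pairs of labeled sentences: pairs sharing a label become positive pairs (target similarity 1.0), pairs with different labels become negatives (target 0.0). A minimal sketch of that pair generation in plain Python (the texts and labels below are illustrative; the actual SetFit trainer builds these pairs internally):

```python
from itertools import combinations

def make_contrastive_pairs(examples):
    """Build (text_a, text_b, similarity) triples from labeled texts.

    Same-label pairs get target similarity 1.0, cross-label pairs 0.0 --
    the targets the contrastive loss pulls the embeddings toward.
    """
    pairs = []
    for (text_a, label_a), (text_b, label_b) in combinations(examples, 2):
        pairs.append((text_a, text_b, 1.0 if label_a == label_b else 0.0))
    return pairs

# Illustrative mini training set, not the card's actual data.
examples = [
    ("Reasoning: grounded in the document ... Evaluation: Good", 0),
    ("Reasoning: unsupported by the document ... Evaluation: Bad", 1),
    ("Reasoning: concise and correct ... Evaluation: Good", 0),
]
pairs = make_contrastive_pairs(examples)
# 3 texts yield 3 pairs: one positive (the two label-0 texts), two negatives.
```

The classification head in (2) then sees one embedding per sentence, not pairs.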

Model Details

Model Description

Model Sources

Model Labels

Label Examples
1
  • 'Reasoning:\nThe answer addresses the question about resolving the issue of not discovering pages via sitemaps by following the instructions provided in the document. It accurately lists the relevant steps, including checking the sitemap URL path, resubmitting the URL if it was submitted incorrectly, and using the inspection tool to speed up the process. The answer is directly related to the question and avoids unnecessary information. Moreover, it mentions that the "couldn't fetch" error is known to occur and can resolve itself, fitting the context provided in the document.\n\nEvaluation: Good'
  • 'Reasoning:\nThe answer is mostly well-supported by the document and provides a step-by-step process that is directly related to the question about enabling clients to book multiple participants for a service in Bookings. However, there is a minor but notable error: the instruction to "Enter the John Youngimum number of participants allowed per booking" should correctly say "Enter the maximum number of participants allowed per booking." This typo affects the clarity and correctness of the answer. Additionally, the specific document mentions "Tips" that are omitted in the answer, but this does not significantly detract from the overall utility of the response.\n\nEvaluation: Bad'
  • 'Reasoning:\nThe answer provided is not supported by the provided document, which does not mention any information related to booking services, changing locations, or known issues with service locations. Therefore, the answer lacks context grounding. Additionally, while it states that a known issue has been resolved, it fails to give clear steps or instructions on what to do if an error is encountered.\n\nEvaluation: Bad'
0
  • "Reasoning:\nThe answer accurately reflects the information provided in the document, which states that you cannot transfer or update the booking app on your site. It also correctly includes the suggestion to vote for the desired feature for future updates, which is mentioned in the provided document. The answer is concise, directly addressing the question without unnecessary information, and provides a correct and detailed response based on the document's content.\n\nEvaluation: Good"
  • 'Reasoning:\nThe answer is directly related to the question and supported by the provided document. It succinctly outlines the steps needed to add a service and sets up service lists on separate pages, aligning with the guidelines in the document. Additionally, it includes instructions on setting up a service page exclusive to site members, which is also supported by the document. The response is clear, detailed, and avoids unnecessary information, providing correct and step-by-step instructions.\n\nEvaluation: Good'
  • 'Reasoning:\nThe provided answer aligns well with the document's content by offering detailed instructions on how to display blog categories using datasets and repeaters or tables. All the steps mentioned (creating datasets, connecting them, adding repeaters or tables, linking datasets, and setting up filters) are grounded in the document. The answer, however, contains placeholder text "95593638" instead of the word "create," which detracts from clarity and correctness. Although minor, the repetition of these placeholders indicates a lack of attention to detail, impacting the conciseness and correctness of the instructions.\n\nEvaluation: Bad'

Evaluation

Metrics

Label | Accuracy
all   | 0.5208
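The label accuracy above is plain fraction-correct over the evaluation set. A sketch of the computation (the predictions below are hypothetical and chosen only so the ratio lands on the reported value; they are not the card's actual evaluation data):

```python
def label_accuracy(y_true, y_pred):
    """Fraction of predictions that exactly match the gold label."""
    if len(y_true) != len(y_pred):
        raise ValueError("prediction/label length mismatch")
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

# Illustrative only: 25 correct out of 48 reproduces the 0.5208 above.
acc = label_accuracy([0] * 48, [0] * 25 + [1] * 23)
print(round(acc, 4))  # 0.5208
```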

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_wix_qa_gpt-4o_improved-cot-instructions_chat_few_shot_generated_only_reas")
# Run inference
preds = model("""Reasoning:
The answer provided is detailed and outlines the steps to block off time slots in the Wix Booking Calendar. However, the question specifically asks about removing the time from showing on the booking button, not about blocking off time slots. The instructions given do not address the question directly. The document also does not mention any method for removing the time from the booking button, so the answer lacks context grounding and relevance to both the question and the document.

Evaluation: Bad""")

Training Details

Training Set Metrics

Training set | Min | Median  | Max
Word count   | 63  | 88.8667 | 151

Label | Training Sample Count
0     | 22
1     | 23
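The word-count statistics above are simple whitespace-token counts over the training texts. A sketch with hypothetical texts (the real training set has 45 samples, 22 + 23):

```python
from statistics import median

def word_count_stats(texts):
    """Min / median / max whitespace-token counts, as in the table above."""
    counts = [len(t.split()) for t in texts]
    return min(counts), median(counts), max(counts)

# Hypothetical three-sample set, not the card's actual training data.
lo, mid, hi = word_count_stats([
    "short reasoning text",
    "a somewhat longer reasoning text here",
    "the longest illustrative reasoning text of the three examples",
])
```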

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (5, 5)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
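The `CosineSimilarityLoss` named above fits the cosine similarity of each embedding pair to its 0/1 contrastive target via squared error. A plain-Python sketch of that shape (toy vectors; the real implementation lives in Sentence Transformers and operates on batched tensors):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def cosine_similarity_loss(u, v, target):
    """Squared error between a pair's cosine similarity and its
    0/1 contrastive target -- the shape of CosineSimilarityLoss."""
    return (cosine_similarity(u, v) - target) ** 2

# Identical vectors with a positive (1.0) target give zero loss.
loss = cosine_similarity_loss([1.0, 2.0], [1.0, 2.0], 1.0)
```

The `margin: 0.25` and `distance_metric: cosine_distance` entries apply only when a margin-based loss is selected; with `CosineSimilarityLoss` they are unused defaults.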

Training Results

Epoch Step Training Loss Validation Loss
0.0088 1 0.1951 -
0.4425 50 0.2544 -
0.8850 100 0.152 -
1.3274 150 0.0046 -
1.7699 200 0.0023 -
2.2124 250 0.0019 -
2.6549 300 0.0017 -
3.0973 350 0.0015 -
3.5398 400 0.0014 -
3.9823 450 0.0014 -
4.4248 500 0.0013 -
4.8673 550 0.0013 -

Framework Versions

  • Python: 3.10.14
  • SetFit: 1.1.0
  • Sentence Transformers: 3.1.0
  • Transformers: 4.44.0
  • PyTorch: 2.4.1+cu121
  • Datasets: 2.19.2
  • Tokenizers: 0.19.1

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}