---
language: en
license: mit
tags:
- natural-language-inference
- sentence-transformers
- transformers
- nlp
- model-card
---

# e5-small-v2-nli

- **Base Model:** [intfloat/e5-small-v2](https://huggingface.co/intfloat/e5-small-v2)
- **Task:** Natural Language Inference (NLI)
- **Framework:** Hugging Face Transformers, Sentence Transformers

e5-small-v2-nli is a fine-tuned NLI model that classifies the relationship between a pair of sentences into one of three categories: entailment, neutral, or contradiction. It builds on [intfloat/e5-small-v2](https://huggingface.co/intfloat/e5-small-v2) to improve performance on NLI tasks.

## Intended Use

e5-small-v2-nli is suited to applications that require understanding the logical relationship between sentences, including:

- Semantic textual similarity
- Question answering
- Dialogue systems
- Content moderation

## Performance

e5-small-v2-nli was trained on the [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset, achieving competitive results in sentence-pair classification. Performance on the MNLI matched validation set:

- Accuracy: 0.7765
- Precision: 0.78
- Recall: 0.78
- F1-score: 0.77

## Training Details
- **Dataset:** [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- **Sampling:** 100,000 training samples and 10,000 evaluation samples
- **Fine-tuning process:**
  - Custom Python script with adaptive-precision training (bfloat16)
  - Early stopping based on evaluation loss
- **Hyperparameters:**
  - **Learning rate:** 2e-5
  - **Batch size:** 64
  - **Optimizer:** AdamW (weight decay: 0.01)
  - **Training duration:** up to 10 epochs
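The early-stopping criterion above can be sketched in pure Python. The exact patience and tolerance used for this model are not stated in the card, so the values below are illustrative assumptions:

```python
class EarlyStopping:
    """Signal that training should stop when evaluation loss stops improving.

    `patience` and `min_delta` are illustrative; the model card does not
    state the values actually used.
    """

    def __init__(self, patience: int = 2, min_delta: float = 0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best_loss = float("inf")
        self.bad_epochs = 0

    def step(self, eval_loss: float) -> bool:
        """Record one epoch's evaluation loss; return True to stop training."""
        if eval_loss < self.best_loss - self.min_delta:
            self.best_loss = eval_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience


# Made-up evaluation losses for demonstration only.
stopper = EarlyStopping(patience=2)
for epoch, loss in enumerate([0.62, 0.55, 0.53, 0.54, 0.56], start=1):
    if stopper.step(loss):
        print(f"stopping after epoch {epoch}")  # triggers once loss plateaus
        break
```

In the actual training script this check would run after each evaluation pass, restoring the checkpoint with the best evaluation loss before saving.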
## Reproducibility

To ensure reproducibility:

- Fixed random seed: 42
- Environment:
  - Python: 3.10.12
  - PyTorch: 2.5.1
  - Transformers: 4.44.2
## Usage Instructions

### Using Sentence Transformers

```python
from sentence_transformers import CrossEncoder

model_name = "agentlans/e5-small-v2-nli"
model = CrossEncoder(model_name)
scores = model.predict(
    [
        ("A man is eating pizza", "A man eats something"),
        (
            "A black race car starts up in front of a crowd of people.",
            "A man is driving down a lonely road.",
        ),
    ]
)
label_mapping = ["entailment", "neutral", "contradiction"]
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
print(labels)  # Output: ['entailment', 'contradiction']
```

### Using the Transformers Library

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "agentlans/e5-small-v2-nli"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

features = tokenizer(
    [
        "A man is eating pizza",
        "A black race car starts up in front of a crowd of people.",
    ],
    ["A man eats something", "A man is driving down a lonely road."],
    padding=True,
    truncation=True,
    return_tensors="pt",
)

model.eval()
with torch.no_grad():
    scores = model(**features).logits
label_mapping = ["entailment", "neutral", "contradiction"]
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)  # Output: ['entailment', 'contradiction']
```

## Limitations and Ethical Considerations

e5-small-v2-nli may reflect biases present in its training data. Users should evaluate its performance in their specific contexts to ensure fairness and accuracy.

## Conclusion

e5-small-v2-nli offers a robust solution for NLI tasks, extending [intfloat/e5-small-v2](https://huggingface.co/intfloat/e5-small-v2) with straightforward integration into existing frameworks. It helps developers build applications that require nuanced language understanding.