---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
pipeline_tag: text-generation
---

Description: Does the premise entail the hypothesis?

Original dataset: glue_mnli
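
For reference, the underlying data is the MNLI subset of GLUE, which can be loaded with the Hugging Face datasets library. A minimal sketch (assumes the datasets package is installed):

```
from datasets import load_dataset

# "glue" / "mnli" is the standard Hub identifier for this dataset.
# Labels: 0 = entailment, 1 = neutral, 2 = contradiction.
mnli = load_dataset("glue", "mnli")
print(mnli["train"][0])
```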

---

Try querying this adapter for free in LoRA Land at https://predibase.com/lora-land!

The adapter category is Academic Benchmarks and the adapter name is Natural Language Inference (MNLI).

---

Sample input: You are given a premise and a hypothesis below. If the premise entails the hypothesis, return 0. If the premise contradicts the hypothesis, return 2. Otherwise, if the premise does neither, return 1.\n\n### Premise: You and your friends are not welcome here, said Severn.\n\n### Hypothesis: Severn said the people were not welcome there.\n\n### Label:

---

Sample output: 0

---
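
The sample input above follows a fixed template. A minimal sketch of how such a prompt might be assembled in Python (the build_prompt helper and its arguments are illustrative, not part of the adapter):

```
def build_prompt(premise: str, hypothesis: str) -> str:
    # Mirrors the sample input: task instructions, then the premise,
    # the hypothesis, and a trailing "### Label:" for the model to complete.
    return (
        "You are given a premise and a hypothesis below. "
        "If the premise entails the hypothesis, return 0. "
        "If the premise contradicts the hypothesis, return 2. "
        "Otherwise, if the premise does neither, return 1.\n\n"
        f"### Premise: {premise}\n\n"
        f"### Hypothesis: {hypothesis}\n\n"
        "### Label: "
    )

prompt = build_prompt(
    "You and your friends are not welcome here, said Severn.",
    "Severn said the people were not welcome there.",
)
```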

Try using this adapter yourself!

```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
peft_model_id = "predibase/glue_mnli"

# Load the base model and its tokenizer, then attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)
```
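
With the adapter loaded, you can reproduce the sample above. A minimal sketch (the generation settings, such as max_new_tokens=2, are illustrative; the prompt is the sample input shown earlier):

```
import torch

prompt = (
    "You are given a premise and a hypothesis below. "
    "If the premise entails the hypothesis, return 0. "
    "If the premise contradicts the hypothesis, return 2. "
    "Otherwise, if the premise does neither, return 1.\n\n"
    "### Premise: You and your friends are not welcome here, said Severn.\n\n"
    "### Hypothesis: Severn said the people were not welcome there.\n\n"
    "### Label: "
)

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=2)

# Decode only the newly generated tokens; the adapter answers
# "### Label:" with the predicted class (0, 1, or 2).
label = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(label.strip())  # expected: 0
```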