simonhughes22 committed
Commit · 8692292
1 Parent(s): ad8e13b
Update README.md
README.md CHANGED
@@ -21,6 +21,7 @@ The model can be used like this:
 
 ```python
 from sentence_transformers import CrossEncoder
+
 model = CrossEncoder('vectara/hallucination_evaluation_model')
 scores = model.predict([
     ["A man walks into a bar and buys a drink", "A bloke swigs alcohol at a pub"],
@@ -33,16 +34,18 @@ scores = model.predict([
 ])
 ```
 
-This returns a numpy array:
+This returns a numpy array representing a factual consistency score (a score < 0.5 indicates a likely hallucination):
 ```
 array([0.61051559, 0.00047493709, 0.99639291, 0.00021221573, 0.99599433, 0.0014127002, 0.0028262993], dtype=float32)
 ```
 
 ## Usage with Transformers AutoModel
 You can also use the model directly with the Transformers library (without the SentenceTransformers library):
+
 ```python
 from transformers import AutoTokenizer, AutoModelForSequenceClassification
 import torch
+import numpy as np
 
 model = AutoModelForSequenceClassification.from_pretrained('vectara/hallucination_evaluation_model')
 tokenizer = AutoTokenizer.from_pretrained('vectara/hallucination_evaluation_model')
@@ -67,7 +70,7 @@ with torch.no_grad():
 scores = 1 / (1 + np.exp(-logits)).flatten()
 ```
 
-This returns a numpy array:
+This returns a numpy array representing a factual consistency score (a score < 0.5 indicates a likely hallucination):
 ```
 array([0.61051559, 0.00047493709, 0.99639291, 0.00021221573, 0.99599433, 0.0014127002, 0.0028262993], dtype=float32)
 ```
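The updated description ties each score to a 0.5 cut-off. As a rough sketch of how a caller might apply that threshold to the CrossEncoder output shown in the hunks above (the `pairs` list and variable names here are placeholders for illustration, not part of the commit):

```python
from sentence_transformers import CrossEncoder

# Load the cross-encoder exactly as in the README snippet above.
model = CrossEncoder('vectara/hallucination_evaluation_model')

# Example (premise, hypothesis) pairs; only the first comes from the README.
pairs = [
    ["A man walks into a bar and buys a drink", "A bloke swigs alcohol at a pub"],
]

scores = model.predict(pairs)

# Per the updated README text: a score below 0.5 suggests the hypothesis is
# likely not supported by the premise, i.e. a probable hallucination.
for (premise, hypothesis), score in zip(pairs, scores):
    label = "consistent" if score >= 0.5 else "likely hallucination"
    print(f"{score:.3f}  {label}  ::  {premise!r} -> {hypothesis!r}")
```

Read against the example array in the diff, four of the seven scores fall below the 0.5 threshold.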
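The later hunks show only fragments of the Transformers example; the pair tokenization and forward pass live in unchanged README lines the diff does not display. Below is a minimal sketch of how the visible pieces could fit together, assuming standard premise/hypothesis pair tokenization and the sigmoid over raw logits from the diff's last code line; it is an illustration under those assumptions, not the README's exact hidden code.

```python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Model and tokenizer loading as shown in the hunks above.
model = AutoModelForSequenceClassification.from_pretrained('vectara/hallucination_evaluation_model')
tokenizer = AutoTokenizer.from_pretrained('vectara/hallucination_evaluation_model')

# Example (premise, hypothesis) pair taken from the README snippet.
pairs = [
    ["A man walks into a bar and buys a drink", "A bloke swigs alcohol at a pub"],
]

# Assumed step: tokenize each premise/hypothesis pair as a sentence pair.
inputs = tokenizer(
    [p[0] for p in pairs],
    [p[1] for p in pairs],
    padding=True,
    truncation=True,
    return_tensors="pt",
)

with torch.no_grad():
    logits = model(**inputs).logits.numpy()

# Same sigmoid as the diff's final code line: map raw logits to 0-1 scores.
scores = 1 / (1 + np.exp(-logits)).flatten()
print(scores)
```

The printed scores can then be read against the same 0.5 threshold as in the CrossEncoder sketch above.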