Add eval results
README.md
CHANGED
@@ -60,4 +60,8 @@ pipeline = transformers.pipeline(
 
 outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
 print(outputs[0]["generated_text"])
 ```
+
+Evaluation results for openllm benchmark via [llm-autoeval](https://github.com/mlabonne/llm-autoeval)
+
+https://gist.github.com/saucam/dcc1f43acce8179f476afc2d91be53ff
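The context lines in the hunk call a `transformers` text-generation pipeline and then index into its return value. As a minimal sketch of why `outputs[0]["generated_text"]` works (the pipeline's return value is mocked here so no model has to load, and the generated text is illustrative, not a real model output), a text-generation pipeline returns a list with one dict per generated sequence, each keyed by `"generated_text"`:

```python
# Mocked return value of a transformers text-generation pipeline call:
# a list of dicts, one per returned sequence, keyed by "generated_text".
# (Illustrative stand-in; a real call would be `outputs = pipeline(prompt, ...)`.)
outputs = [{"generated_text": "Once upon a time, a model was benchmarked."}]

# The README's context line prints the first (and only) generated sequence.
print(outputs[0]["generated_text"])
```

With `num_return_sequences` left at its default of 1, the list holds a single dict, which is why the snippet indexes element 0 directly.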