Tags: Text Generation · Transformers · Safetensors · PyTorch · mistral · text-generation-inference · Merge · 7b · mistralai/Mistral-7B-Instruct-v0.1 · HuggingFaceH4/zephyr-7b-beta · Generated from Trainer · en · dataset:HuggingFaceH4/ultrachat_200k · dataset:HuggingFaceH4/ultrafeedback_binarized · arxiv:2305.18290 · arxiv:2310.16944 · Eval Results · Inference Endpoints · has_space · conversational
MaziyarPanahi committed · Commit f862db5 · Parent(s): 278a112

Update README.md (#1)

- Update README.md (5dd2293e4e0d7dfd658039ecd9f735ac4e99cabd)
README.md CHANGED

@@ -35,6 +35,11 @@ zephyr-7b-beta-Mistral-7B-Instruct-v0.1 is a merge of the following models:
 * [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
 * [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
 
+## Repositories available
+
+* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/MaziyarPanahi/zephyr-7b-beta-Mistral-7B-Instruct-v0.1-GPTQ)
+* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/MaziyarPanahi/zephyr-7b-beta-Mistral-7B-Instruct-v0.1-GGUF)
+
 ## 🧩 Configuration
 
 ```yaml
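As a rough usage sketch for the GGUF repository added in this commit: the snippet below downloads one quant file and runs it with llama-cpp-python for CPU+GPU inference. The `.gguf` filename, context size, and generation settings are illustrative assumptions, not taken from the model card; check the GGUF repo's file listing for the actual quant names.

```python
# Hedged sketch: run one of the GGUF quants with llama-cpp-python.
# The filename below is an assumed example -- pick a real quant from the repo's file list.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="MaziyarPanahi/zephyr-7b-beta-Mistral-7B-Instruct-v0.1-GGUF",
    filename="zephyr-7b-beta-Mistral-7B-Instruct-v0.1.Q4_K_M.gguf",  # assumed filename
)

llm = Llama(
    model_path=gguf_path,
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to GPU if one is available; 0 = CPU only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarise what a model merge is."}],
    max_tokens=200,
)
print(out["choices"][0]["message"]["content"])
```

The GPTQ repository linked above follows the same pattern but targets GPU-only inference; it can typically be loaded through `transformers` with the GPTQ dependencies installed, using the quantisation branch that matches your hardware.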