---
base_model:
- Locutusque/Hercules-3.1-Mistral-7B
- LeroyDyer/Mixtral_BaseModel
library_name: transformers
tags:
- mergekit
- merge
license: mit
language:
- en
metrics:
- bleu
- accuracy
---
# Mixtral_instruct_7b
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
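Linear merging computes an element-wise weighted average of each pair of matching parameter tensors; with mergekit's default weight normalization, the weights act as relative proportions. A minimal sketch of the idea in PyTorch (not mergekit's actual implementation, which also handles config and tokenizer reconciliation):
```python
import torch

def linear_merge(tensors: list[torch.Tensor], weights: list[float]) -> torch.Tensor:
    """Weighted average of matching parameter tensors from several models."""
    total = sum(weights)  # mergekit normalizes weights by default
    return sum((w / total) * t for w, t in zip(weights, tensors))

# e.g. for a single layer, with the weights from the configuration below:
# merged = linear_merge([base_tensor, hercules_tensor], [1.0, 0.6])
```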
### Models Merged
The following models were included in the merge:
* [Locutusque/Hercules-3.1-Mistral-7B](https://huggingface.co/Locutusque/Hercules-3.1-Mistral-7B)
* [LeroyDyer/Mixtral_BaseModel](https://huggingface.co/LeroyDyer/Mixtral_BaseModel)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: LeroyDyer/Mixtral_BaseModel
    parameters:
      weight: 1.0
  - model: Locutusque/Hercules-3.1-Mistral-7B
    parameters:
      weight: 0.6
merge_method: linear
dtype: float16
```
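To reproduce the merge, save this configuration as `config.yml` and run it through mergekit's CLI, e.g. `mergekit-yaml config.yml ./output-directory` (the output directory name is arbitrary).

A GGUF quantization of the merge can be run locally with llama.cpp through llama-index: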
```python
%pip install llama-index-embeddings-huggingface
%pip install llama-index-llms-llama-cpp
%pip install llama-index

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms.llama_cpp import LlamaCPP
from llama_index.llms.llama_cpp.llama_utils import (
    messages_to_prompt,
    completion_to_prompt,
)

model_url = "https://huggingface.co/LeroyDyer/Mixtral_BaseModel-gguf/resolve/main/mixtral_instruct_7b.q8_0.gguf"

llm = LlamaCPP(
    # you can pass in the URL to a GGUF model to download it automatically
    model_url=model_url,
    # optionally, set the path to a pre-downloaded model instead of model_url
    model_path=None,
    temperature=0.1,
    max_new_tokens=256,
    # set below the model's context window to allow for some wiggle room
    context_window=3900,
    # kwargs to pass to __call__()
    generate_kwargs={},
    # kwargs to pass to __init__(); set n_gpu_layers to at least 1 to use the GPU
    model_kwargs={"n_gpu_layers": 1},
    # transform inputs into the instruct prompt format the model expects
    messages_to_prompt=messages_to_prompt,
    completion_to_prompt=completion_to_prompt,
    verbose=True,
)

prompt = input("Enter your prompt: ")
response = llm.complete(prompt)
print(response.text)
```
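The `SimpleDirectoryReader` and `VectorStoreIndex` imports above can turn the same `llm` into a simple retrieval-augmented pipeline. A sketch under assumptions: the embedding model name and the `./data` directory are illustrative placeholders, not part of this repository:
```python
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# route llama-index's global defaults through the local GGUF model
Settings.llm = llm
# any local sentence-embedding model works here; this choice is an assumption
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# index the documents in ./data and query them with the merged model
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
print(query_engine.query("Summarize these documents."))
```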
Works well!