Details on the prompt wrapper and other configurations are on the config.json file in the files repository.

## How to Get Started with the Model
To pull the model via API:

    from huggingface_hub import snapshot_download
    snapshot_download("llmware/dragon-mistral-0.3-gguf", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
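Once the snapshot completes, the GGUF file(s) sit directly in `local_dir`. A minimal helper to locate them (a sketch only — `find_gguf_files` is not part of any library, and the exact filenames depend on the repository contents):

```python
from pathlib import Path

def find_gguf_files(local_dir):
    """Return the GGUF files found under local_dir (the folder passed to snapshot_download)."""
    return sorted(Path(local_dir).glob("*.gguf"))
```

Pass the returned path to whichever GGUF runtime you prefer.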
Load in your favorite GGUF inference engine, or try with llmware as follows:

    from llmware.models import ModelCatalog

    # to load the model and make a basic inference
    # (query is your question; text_sample is the passage to use as context)
    model = ModelCatalog().load_model("llmware/dragon-mistral-0.3-gguf", temperature=0.0, sample=False)
    response = model.inference(query, add_context=text_sample)
Details on the prompt wrapper and other configurations are on the config.json file in the files repository.
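For orientation only — the authoritative template lives in config.json — dragon-family models have historically used a human-bot style prompt wrapper, which can be sketched as follows (the helper name and exact format here are assumptions, not this repository's confirmed template):

```python
def wrap_prompt(context, question):
    # Hypothetical helper: illustrates a human-bot style prompt wrapper.
    # Verify the exact template against config.json in this repository.
    return f"<human>: {context}\n{question}\n<bot>:"
```

When using `ModelCatalog` as shown above, llmware applies the configured wrapper automatically, so manual wrapping is only needed with other inference engines.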
## Model Card Contact
Darren Oberst & llmware team