rombodawg committed on
Commit 5b75b61
1 Parent(s): da8bc1b

Update README.md

Files changed (1):
  1. README.md +11 -2
README.md CHANGED
@@ -15,14 +15,23 @@ Replete-LLM-Qwen2-7b
 
 Thank you to TensorDock for sponsoring **Replete-LLM**
 you can check out their website for cloud compute rental below.
--
-https://tensordock.com
+- https://tensordock.com
 
 _____________________________________________________________
 **Replete-LLM** is **Replete-AI**'s flagship model. We take pride in releasing a fully open-source, low-parameter, and competitive AI model that not only surpasses its predecessor **Qwen2-7B-Instruct** in performance, but also competes with (if not surpasses) closed-source flagship models like **gpt-3.5-turbo**, as well as open-source models such as **gemma-2-9b-it**
 and **Meta-Llama-3.1-8B-Instruct** in terms of overall performance across all fields and categories. You can find the dataset that this model was trained on linked below:
 
 - https://huggingface.co/datasets/Replete-AI/Everything_Instruct_8k_context_filtered
 
+Try bartowski's quantizations:
+
+- https://huggingface.co/bartowski/Replete-LLM-Qwen2-7b-exl2
+
+- https://huggingface.co/bartowski/Replete-LLM-Qwen2-7b-GGUF
+
+Can't run the model locally? Then use the Hugging Face Space instead:
+
+- https://huggingface.co/spaces/rombodawg/Replete-LLM-Qwen2-7b
+
 Some statistics about the data the model was trained on can be found in the image and details below, while a more comprehensive look can be found in the model card for the dataset (linked above):
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/75SR21J3-zbTGKYbeoBzX.png)