Upload folder using huggingface_hub
- .gitattributes +12 -0
- README.md +47 -0
- featherless-quants.png +3 -0
- raidhon-coven_7b_128k_orpo_alpha-IQ4_XS.gguf +3 -0
- raidhon-coven_7b_128k_orpo_alpha-Q2_K.gguf +3 -0
- raidhon-coven_7b_128k_orpo_alpha-Q3_K_L.gguf +3 -0
- raidhon-coven_7b_128k_orpo_alpha-Q3_K_M.gguf +3 -0
- raidhon-coven_7b_128k_orpo_alpha-Q3_K_S.gguf +3 -0
- raidhon-coven_7b_128k_orpo_alpha-Q4_K_M.gguf +3 -0
- raidhon-coven_7b_128k_orpo_alpha-Q4_K_S.gguf +3 -0
- raidhon-coven_7b_128k_orpo_alpha-Q5_K_M.gguf +3 -0
- raidhon-coven_7b_128k_orpo_alpha-Q5_K_S.gguf +3 -0
- raidhon-coven_7b_128k_orpo_alpha-Q6_K.gguf +3 -0
- raidhon-coven_7b_128k_orpo_alpha-Q8_0.gguf +3 -0
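The commit title above indicates the files were pushed with huggingface_hub. As a minimal sketch only (not the uploader's actual script), this is how a folder of quantized GGUF files is typically published in a single commit with `HfApi.upload_folder`; the local folder path is an assumption, and the repo id is taken from the links in the README below.

```python
# Sketch: push a local folder of GGUF quants to the Hub in one commit.
# Assumes `huggingface-cli login` (or HF_TOKEN) has already been set up.
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path="./quants",  # assumed local directory with the .gguf files, README.md, and image
    repo_id="featherless-ai-quants/raidhon-coven_7b_128k_orpo_alpha-GGUF",
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```

Large binaries in the folder are uploaded through Git LFS, which appears to be what produced the per-file filter rules added to .gitattributes below.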
.gitattributes
CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+featherless-quants.png filter=lfs diff=lfs merge=lfs -text
+raidhon-coven_7b_128k_orpo_alpha-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+raidhon-coven_7b_128k_orpo_alpha-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+raidhon-coven_7b_128k_orpo_alpha-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+raidhon-coven_7b_128k_orpo_alpha-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+raidhon-coven_7b_128k_orpo_alpha-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+raidhon-coven_7b_128k_orpo_alpha-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+raidhon-coven_7b_128k_orpo_alpha-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+raidhon-coven_7b_128k_orpo_alpha-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+raidhon-coven_7b_128k_orpo_alpha-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+raidhon-coven_7b_128k_orpo_alpha-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+raidhon-coven_7b_128k_orpo_alpha-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,47 @@
+---
+base_model: raidhon/coven_7b_128k_orpo_alpha
+pipeline_tag: text-generation
+quantized_by: featherless-ai-quants
+---
+
+# raidhon/coven_7b_128k_orpo_alpha GGUF Quantizations
+
+![Featherless AI Quants](./featherless-quants.png)
+
+*Optimized GGUF quantization files for enhanced model performance*
+
+> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
+---
+
+## Available Quantizations
+
+| Quantization Type | File | Size |
+|-------------------|------|------|
+| IQ4_XS | [raidhon-coven_7b_128k_orpo_alpha-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/raidhon-coven_7b_128k_orpo_alpha-GGUF/blob/main/raidhon-coven_7b_128k_orpo_alpha-IQ4_XS.gguf) | 3761.66 MB |
+| Q2_K | [raidhon-coven_7b_128k_orpo_alpha-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/raidhon-coven_7b_128k_orpo_alpha-GGUF/blob/main/raidhon-coven_7b_128k_orpo_alpha-Q2_K.gguf) | 2593.27 MB |
+| Q3_K_L | [raidhon-coven_7b_128k_orpo_alpha-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/raidhon-coven_7b_128k_orpo_alpha-GGUF/blob/main/raidhon-coven_7b_128k_orpo_alpha-Q3_K_L.gguf) | 3644.97 MB |
+| Q3_K_M | [raidhon-coven_7b_128k_orpo_alpha-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/raidhon-coven_7b_128k_orpo_alpha-GGUF/blob/main/raidhon-coven_7b_128k_orpo_alpha-Q3_K_M.gguf) | 3355.97 MB |
+| Q3_K_S | [raidhon-coven_7b_128k_orpo_alpha-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/raidhon-coven_7b_128k_orpo_alpha-GGUF/blob/main/raidhon-coven_7b_128k_orpo_alpha-Q3_K_S.gguf) | 3017.97 MB |
+| Q4_K_M | [raidhon-coven_7b_128k_orpo_alpha-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/raidhon-coven_7b_128k_orpo_alpha-GGUF/blob/main/raidhon-coven_7b_128k_orpo_alpha-Q4_K_M.gguf) | 4166.07 MB |
+| Q4_K_S | [raidhon-coven_7b_128k_orpo_alpha-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/raidhon-coven_7b_128k_orpo_alpha-GGUF/blob/main/raidhon-coven_7b_128k_orpo_alpha-Q4_K_S.gguf) | 3948.57 MB |
+| Q5_K_M | [raidhon-coven_7b_128k_orpo_alpha-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/raidhon-coven_7b_128k_orpo_alpha-GGUF/blob/main/raidhon-coven_7b_128k_orpo_alpha-Q5_K_M.gguf) | 4893.69 MB |
+| Q5_K_S | [raidhon-coven_7b_128k_orpo_alpha-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/raidhon-coven_7b_128k_orpo_alpha-GGUF/blob/main/raidhon-coven_7b_128k_orpo_alpha-Q5_K_S.gguf) | 4766.19 MB |
+| Q6_K | [raidhon-coven_7b_128k_orpo_alpha-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/raidhon-coven_7b_128k_orpo_alpha-GGUF/blob/main/raidhon-coven_7b_128k_orpo_alpha-Q6_K.gguf) | 5666.80 MB |
+| Q8_0 | [raidhon-coven_7b_128k_orpo_alpha-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/raidhon-coven_7b_128k_orpo_alpha-GGUF/blob/main/raidhon-coven_7b_128k_orpo_alpha-Q8_0.gguf) | 7339.34 MB |
+
+
+---
+
+## Powered by [Featherless AI](https://featherless.ai)
+
+### Key Features
+
+- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
+- **Zero Infrastructure** - No server setup or maintenance required
+- **Vast Compatibility** - Support for 2400+ models and counting
+- **Affordable Pricing** - Starting at just $10/month
+
+---
+
+**Links:**
+[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
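The README table above gives a per-file download link and size for each quantization. As a hedged sketch, one of these files can be fetched locally with huggingface_hub and then handed to a GGUF runtime such as llama.cpp; the choice of the Q4_K_M file here is only an example.

```python
# Sketch: download one quantization listed in the README table and print its
# local cache path. Requires the `huggingface_hub` package.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="featherless-ai-quants/raidhon-coven_7b_128k_orpo_alpha-GGUF",
    filename="raidhon-coven_7b_128k_orpo_alpha-Q4_K_M.gguf",
)
print(path)  # local path of the downloaded GGUF file, ready for a GGUF-compatible runtime
```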
featherless-quants.png
ADDED
Git LFS Details
raidhon-coven_7b_128k_orpo_alpha-IQ4_XS.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:765889421a7111e9fc7d919a4adc700a4b532d92dd46c0c2b433ea2d788c71bc
+size 3944389600
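Each .gguf entry in this commit is a Git LFS pointer recording only the object's sha256 (oid) and byte size. A downloaded copy can be checked against those two values, for example with the sketch below; the local filename is an assumption.

```python
# Sketch: verify a downloaded GGUF file against the oid/size in the LFS pointer above.
import hashlib

EXPECTED_OID = "765889421a7111e9fc7d919a4adc700a4b532d92dd46c0c2b433ea2d788c71bc"
EXPECTED_SIZE = 3944389600

digest = hashlib.sha256()
total = 0
with open("raidhon-coven_7b_128k_orpo_alpha-IQ4_XS.gguf", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        digest.update(chunk)
        total += len(chunk)

assert total == EXPECTED_SIZE, f"size mismatch: {total}"
assert digest.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("OK: file matches the LFS pointer")
```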
raidhon-coven_7b_128k_orpo_alpha-Q2_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7b6a71d4cd05a3e727e80819a92570412e53b569095598db3ca6629c41353e52
+size 2719243232
raidhon-coven_7b_128k_orpo_alpha-Q3_K_L.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2409d363b707377aa24ed8cdad59ce39c61733bc478d70bda6f479e16a59af09
+size 3822025696
raidhon-coven_7b_128k_orpo_alpha-Q3_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:01d937745406db10f8ec267ba1fcb8c0526e909e6f939c2ee99fd914ae117218
+size 3518987232
raidhon-coven_7b_128k_orpo_alpha-Q3_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ca20a42590ba521b63e9020a1748092308ae3d6579d0efb3322e4fb9b18f6cc5
+size 3164568544
raidhon-coven_7b_128k_orpo_alpha-Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e799c8257fb3c7bfd09531c16513d782d0e9f9f0e07df39ae8974ce4b494d93e
+size 4368440288
raidhon-coven_7b_128k_orpo_alpha-Q4_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2cf5b30f3ab5e97f7fd91d3a6ac62fbd38bd056a9d580d7ed31bbc99ab11c82b
+size 4140375008
raidhon-coven_7b_128k_orpo_alpha-Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:16ff9e662dfb6bf9ef91b0c4a379079d15230dfa4007d04dcdbe2c8ec678b347
+size 5131410400
raidhon-coven_7b_128k_orpo_alpha-Q5_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:45b1d4812efd7b77d55752046466646033bdf380b91d7156a84b4063bf98e7b2
+size 4997716960
raidhon-coven_7b_128k_orpo_alpha-Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:77db3b6c2e4260a39270cdd5598b21ad433e65f5a4fdddc3b9a6cce4c181cd0d
+size 5942066144
raidhon-coven_7b_128k_orpo_alpha-Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3841b260562f1042d6e721d2c3a37674e77b140dd5c7ba649d9564b17c46c845
+size 7695858656