|
--- |
|
tags: |
|
- gguf |
|
- iMat |
|
- conversational |
|
- storywriting |
|
license: cc-by-nc-4.0 |
|
--- |
|
|
|
<h3> Model Card for Fimbulvetr-11B-v2-iMat-GGUF</h3> |
|
|
|
* Model creator: [Sao10K](https://huggingface.co/Sao10K/) |
|
* Original model: [Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) |
|
|
|
<b>Update 3/4/24:</b> The newest I-quant format, <b>[IQ4_XS](https://huggingface.co/InferenceIllusionist/Fimbulvetr-11B-v2-iMat-GGUF/blob/main/Fimbulvetr-11B-v2-iMat-IQ4_XS.gguf)</b>, outperforms previous I-quants at just 4.25 bpw in [benchmarks](https://github.com/ggerganov/llama.cpp/pull/5747)
|
|
|
Tested on the latest llama.cpp and koboldcpp v1.60.
|
|
|
<h4>This model fits a whole lot into its size! Its understanding of other languages is impressive.</h4>
|
<img src="https://huggingface.co/InferenceIllusionist/Fimbulvetr-11B-v2-iMat-GGUF/resolve/main/Fimbulvetr-11B-v2%20IQ4_XS.JPG" width="850"/> |
|
|
|
<b>Tip: Select the biggest size you can fit in VRAM while still leaving some space for context.</b>
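As a rough rule of thumb, weight size is roughly parameters × bits-per-weight / 8, plus headroom for the KV cache and compute buffers. A minimal sketch, assuming the ~10.7B parameter count of Solar-based 11B models and approximate bpw figures for each quant type (only the 4.25 bpw figure for IQ4_XS comes from this card; the rest are ballpark values, not exact file sizes):

```python
def quant_size_gib(params: float, bpw: float) -> float:
    """Approximate weight size in GiB for a given bits-per-weight."""
    return params * bpw / 8 / 1024**3

PARAMS = 10.7e9  # assumed parameter count for a Solar-based 11B model

# Approximate bits-per-weight per quant type (IQ4_XS = 4.25 per this card)
for name, bpw in [("IQ3_XXS", 3.06), ("IQ3_S", 3.44), ("IQ4_XS", 4.25), ("Q8_0", 8.5)]:
    print(f"{name}: ~{quant_size_gib(PARAMS, bpw):.1f} GiB")
```

Compare the estimate against your GPU's free VRAM, and remember that longer contexts need proportionally more headroom.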
|
|
|
|
|
All credit to Sao10K for the original model. This is just a quick test of the new quantization types, such as IQ3_S, in an attempt to further reduce VRAM requirements.
|
|
|
|
|
|
|
Quantized from fp16 with love. The importance matrix file [Fimbulvetr-11B-v2-imatrix.dat](https://huggingface.co/InferenceIllusionist/Fimbulvetr-11B-v2-iMat-GGUF/blob/main/Fimbulvetr-11B-v2-imatrix.dat) was calculated using Q8_0.
|
|
|
|
|
<i>Looking for Q3/Q4/Q5 quants? See the link in the original model card below.</i> |
|
|
|
--- |
|
|
|
|
|
![Fox1](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2/resolve/main/cute1.jpg) |
|
|
|
*Cute girl to catch your attention.* |
|
|
|
**https://huggingface.co/Sao10K/Fimbulvetr-11B-v2-GGUF <------ GGUF** |
|
|
|
# Fimbulvetr-v2 - A Solar-Based Model |
|
|
|
Prompt Formats - Alpaca or Vicuna. Either one works fine. |
|
Recommended SillyTavern Presets - Universal Light |
|
|
|
Alpaca: |
|
``` |
|
### Instruction: |
|
<Prompt> |
|
### Input: |
|
<Insert Context Here> |
|
### Response: |
|
``` |
|
|
|
Vicuna: |
|
``` |
|
System: <Prompt> |
|
|
|
User: <Input> |
|
|
|
Assistant: |
|
``` |
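The two templates above can be assembled programmatically. A minimal sketch (the helper names are illustrative, not part of any library):

```python
def alpaca_prompt(instruction: str, context: str = "") -> str:
    """Assemble an Alpaca-style prompt; the Input section is optional."""
    parts = [f"### Instruction:\n{instruction}"]
    if context:
        parts.append(f"### Input:\n{context}")
    parts.append("### Response:\n")
    return "\n\n".join(parts)

def vicuna_prompt(system: str, user: str) -> str:
    """Assemble a Vicuna-style prompt ending at the Assistant turn."""
    return f"System: {system}\n\nUser: {user}\n\nAssistant:"

print(alpaca_prompt("Write a short story.", "A fox in a winter forest."))
```

Frontends like SillyTavern handle this formatting for you; a helper like this is only needed when calling the model directly.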
|
|
|
|
|
**** |
|
|
|
Changelogs: |
|
|
|
25/2 - repo renamed to remove test, model card redone. Model's officially out. |
|
<br>15/2 - Heavy testing complete. Good feedback. |
|
|
|
*** |
|
|
|
<details><summary>Rant - Kept For Historical Reasons</summary> |
|
|
|
Ramble to meet minimum length requirements: |
|
|
|
Tbh I wonder if this shit is even worth doing. Like I'm just some broke guy lmao, I've spent so much. And for what? I guess creds. Feels good when a model gets good feedback, but it seems like I'm invisible sometimes. I should probably be advertising myself and my models in other places, but I rarely have the time to. Probably just internal jealousy sparking up here and now. Whatever I guess.
|
|
|
Anyway, the EMT vocation I'm doing is cool except it pays peanuts, damn bruh, 1.1k per month lmao. Government too broke to pay for shit. Pays the bills I suppose.
|
|
|
Anyway cool beans, I'm either going to continue the Solar Train or go to Mixtral / Yi when I get paid. |
|
|
|
You still here? |
|
</details><br> |