ausboss committed on
Commit
b2d9655
1 Parent(s): 5d7c8da

Update README.md

Files changed (1)
  1. README.md +5 -2
README.md CHANGED
@@ -1,10 +1,13 @@
 Merge of [SuperHOT-LoRA-prototype](https://huggingface.co/kaiokendev/SuperHOT-LoRA-prototype) and [llama-30b](https://huggingface.co/huggyllama/llama-30b)
 
 
-Quantization:
+Llama30B-SuperHOT-4bit-128g.safetensors Quantization:
 CUDA_VISIBLE_DEVICES=0 python llama.py ausboss/Llama30B-SuperHOT c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors Llama30B-SuperHOT-4bit-128g.safetensors
 
-Make sure to run with 128g and 4bit arguments in ooba or use the kobold fork that allows for 4bit.
+Llama30B-SuperHOT-4bit.safetensors Quantization:
+CUDA_VISIBLE_DEVICES=0 python llama.py ausboss/Llama30B-SuperHOT c4 --wbits 4 --true-sequential --save_safetensors Llama30B-SuperHOT-4bit.safetensors
+
+
 
 
 # From the SuperHot Page:
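
The line removed in this commit noted that the 128g file must be loaded with matching 4-bit arguments in ooba (text-generation-webui) or a 4bit-capable KoboldAI fork. A minimal loading sketch, assuming the GPTQ-for-LLaMa loader flags that text-generation-webui exposed around the time of this commit; the model directory name Llama30B-SuperHOT-4bit-128g is hypothetical and should match wherever you placed the weights:

# hypothetical model folder name; --wbits/--groupsize must match the quantization command above
python server.py --model Llama30B-SuperHOT-4bit-128g --wbits 4 --groupsize 128 --model_type llama

For the no-groupsize file (Llama30B-SuperHOT-4bit.safetensors), drop --groupsize 128 and keep --wbits 4.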