Ber Zoidberg
committed
Commit 5575ce4
1 Parent(s): f5bda51
Update README.md
README.md
CHANGED
@@ -7,48 +7,4 @@ datasets:
 - LDJnr/Capybara
 ---
 
-
-
-Fine-tuned on 8x4090s for 1.25 epochs.
-
-
-### Model Sources [optional]
-
-- **Repository:** TBD
-- **Demo:** TBD
-
-## Bias, Risks, and Limitations
-
-This fine-tune has had zero alignment, safety data, or anything else shoved down its throat.
-
-## Training Details
-
-### Training Data
-
-See the sidebar for links to the relevant datasets.
-
-### Training Procedure
-
-Trained using QLoRA via the Axolotl tool.
-
-## Evaluation
-
-TBD
-
-## Training procedure
-
-The following `bitsandbytes` quantization config was used during training:
-- quant_method: bitsandbytes
-- load_in_8bit: False
-- load_in_4bit: True
-- llm_int8_threshold: 6.0
-- llm_int8_skip_modules: None
-- llm_int8_enable_fp32_cpu_offload: False
-- llm_int8_has_fp16_weight: False
-- bnb_4bit_quant_type: nf4
-- bnb_4bit_use_double_quant: True
-- bnb_4bit_compute_dtype: bfloat16
-
-### Framework versions
-
-- PEFT 0.6.0
+Deprecated in favor of decapoda-research/Antares-11b-v2
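The removed card lists a `bitsandbytes` 4-bit config but no loading code. A minimal, hedged sketch of how those exact flags map onto `transformers`' `BitsAndBytesConfig` follows; the base-model id is a placeholder, since this card does not name the base model:

```python
# Hedged sketch: maps the flags listed in the card onto BitsAndBytesConfig.
# The model id below is a placeholder; the card does not state the base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_4bit: True (load_in_8bit: False)
    bnb_4bit_quant_type="nf4",              # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=True,         # bnb_4bit_use_double_quant: True
    bnb_4bit_compute_dtype=torch.bfloat16,  # bnb_4bit_compute_dtype: bfloat16
)

base_model_id = "org/base-model"  # placeholder id
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
```

NF4 with double quantization and bfloat16 compute is the standard QLoRA loading recipe; the `llm_int8_*` fields only affect 8-bit loading and are inert when `load_in_4bit=True`.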
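The card also states the model was trained with QLoRA via Axolotl and lists PEFT 0.6.0, but includes no adapter code. A minimal sketch of the generic QLoRA adapter setup with `peft`, assuming the 4-bit model from the snippet above; the rank, alpha, dropout, and target modules are illustrative values, not the actual training hyperparameters:

```python
# Hedged sketch of the generic QLoRA adapter setup with peft (card lists PEFT 0.6.0).
# Hyperparameters below are illustrative, not the values used for this model.
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model = prepare_model_for_kbit_training(model)  # 4-bit model from the snippet above

lora_config = LoraConfig(
    r=16,                                                     # illustrative rank
    lora_alpha=32,                                            # illustrative scaling
    lora_dropout=0.05,                                        # illustrative dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # illustrative targets
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity check: only adapter weights are trainable
```

In practice Axolotl drives these steps, and the training loop itself, from a YAML config file; the snippet only shows where the bitsandbytes and PEFT pieces fit together.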