Hanzalwi committed
Commit 79a4f8b · Parent: c78ac58

Upload model

Files changed (1): README.md (+38, -0)
README.md CHANGED
@@ -274,4 +274,42 @@ The following `bitsandbytes` quantization config was used during training:
 ### Framework versions


+- PEFT 0.6.3.dev0
+## Training procedure
+
+
+The following `bitsandbytes` quantization config was used during training:
+- quant_method: bitsandbytes
+- load_in_8bit: True
+- load_in_4bit: False
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: False
+- bnb_4bit_compute_dtype: float32
+
+### Framework versions
+
+
+- PEFT 0.6.3.dev0
+## Training procedure
+
+
+The following `bitsandbytes` quantization config was used during training:
+- quant_method: bitsandbytes
+- load_in_8bit: True
+- load_in_4bit: False
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: False
+- bnb_4bit_compute_dtype: float32
+
+### Framework versions
+
+
 - PEFT 0.6.3.dev0
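
For context, the `bitsandbytes` settings recorded in the diff correspond to a `BitsAndBytesConfig` from `transformers`. The sketch below shows how that 8-bit setup could be recreated when loading a base model for PEFT/LoRA training. The base model id and the LoRA hyperparameters are illustrative assumptions; the commit itself only records the quantization config and the PEFT version.

```python
# Minimal sketch, assuming a causal-LM base model and LoRA fine-tuning.
# Only the BitsAndBytesConfig values come from the README diff above;
# the model id and LoRA hyperparameters are placeholders.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Quantization config as recorded in the commit.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

# Hypothetical base model; the commit does not name one.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)

# Prepare the 8-bit model for k-bit training and attach a LoRA adapter
# (hyperparameters here are illustrative, not taken from the commit).
base_model = prepare_model_for_kbit_training(base_model)
lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()
```

Note that the `bnb_4bit_*` fields appear in the recorded config but are inert here, since `load_in_8bit=True` and `load_in_4bit=False`; they are simply the defaults serialized alongside the 8-bit settings.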