
Uploaded model

  • Developed by: Solshine (Caleb DeLeeuw)
  • License: apache-2.0
  • Finetuned from model: inceptionai/jais-adapted-7b-chat (after quantization into Solshine/jais-adapted-7b-chat-Q4_K_M-GGUF)
  • Dataset: CopyleftCultivars/Natural-Farming-Real-QandA-Conversations-Q1-2024-Update (real-world Natural Farming advice from over 12 countries and a multitude of real-world farm operations, curated into semi-synthetic data by domain experts)

This GGUF build is still in testing due to complications working with Unsloth. For a functionality-tested version, use Solshine/jais-adapted-7b-chat-Natural-Farmer-lora-merged-full.

V4 of the LoRA adapter (the best training-loss curve among the Unsloth configurations tested) was trained and then merged into this quantized GGUF.
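For reference, here is a minimal sketch of the merge step using PEFT. The adapter path is a hypothetical placeholder; the published merged weights live at Solshine/jais-adapted-7b-chat-Natural-Farmer-lora-merged-full:

```python
# Sketch: merge a trained LoRA adapter into the base model with PEFT, then
# save the merged weights; GGUF conversion happens afterwards with llama.cpp's
# convert_hf_to_gguf.py script. The adapter path below is a hypothetical placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "inceptionai/jais-adapted-7b-chat", torch_dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained("inceptionai/jais-adapted-7b-chat")

model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # hypothetical path
merged = model.merge_and_unload()  # bake the LoRA deltas into the base weights

merged.save_pretrained("jais-natural-farmer-merged")
tokenizer.save_pretrained("jais-natural-farmer-merged")
```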

Training Logs (Unsloth, 2x faster free finetuning):

  • Num GPUs: 1
  • Num examples: 169
  • Num epochs: 2
  • Batch size per device: 2
  • Gradient accumulation steps: 4
  • Total batch size: 8
  • Total steps: 38
  • Trainable parameters: 39,976,960
  • Run: 38/38 steps in 03:29

| Step | Training Loss |
|------|---------------|
| 1    | 2.286800      |
| 2    | 2.205600      |
| 3    | 2.201700      |
| 4    | 2.158100      |
| 5    | 2.021100      |
| 6    | 1.820200      |
| 7    | 1.822500      |
| 8    | 1.565700      |
| 9    | 1.335700      |
| 10   | 1.225900      |
| 11   | 1.081000      |
| 12   | 0.947700      |
| 13   | 0.828600      |
| 14   | 0.830200      |
| 15   | 0.796300      |
| 16   | 0.781200      |
| 17   | 0.781600      |
| 18   | 0.815000      |
| 19   | 0.741400      |
| 20   | 0.847600      |
| 21   | 0.736600      |
| 22   | 0.714300      |
| 23   | 0.706400      |
| 24   | 0.752800      |
| 25   | 0.684600      |
| 26   | 0.647800      |
| 27   | 0.775300      |
| 28   | 0.613800      |
| 29   | 0.679500      |
| 30   | 0.752900      |
| 31   | 0.589800      |
| 32   | 0.729400      |
| 33   | 0.549500      |
| 34   | 0.638500      |
| 35   | 0.609500      |
| 36   | 0.632200      |
| 37   | 0.686400      |
| 38   | 0.724200      |
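For context, a minimal sketch of an Unsloth SFT run matching the logged hyperparameters above. The LoRA rank/alpha, target modules, sequence length, learning rate, and dataset column name are assumptions (the exact V4 config is not listed here), and the SFTTrainer signature reflects the TRL versions current when this card was written:

```python
# Minimal sketch of an Unsloth SFT run matching the logged hyperparameters.
# LoRA rank/alpha, target modules, max_seq_length, and learning rate are
# assumptions; the exact V4 config is not published in this card.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="inceptionai/jais-adapted-7b-chat",
    max_seq_length=2048,   # assumption
    load_in_4bit=True,     # QLoRA-style 4-bit loading, the Unsloth notebook default
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                  # assumption
    lora_alpha=16,         # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumption
)

dataset = load_dataset(
    "CopyleftCultivars/Natural-Farming-Real-QandA-Conversations-Q1-2024-Update",
    split="train",
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumption about the dataset's column name
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,  # from the logs
        gradient_accumulation_steps=4,  # from the logs
        num_train_epochs=2,             # from the logs
        learning_rate=2e-4,             # assumption
        output_dir="outputs",
    ),
)
trainer.train()
```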

This Llama-architecture model was trained 2x faster with Unsloth and Hugging Face's TRL library.

GGUF details

  • Quantization: 8-bit (Q8)
  • Model size: 6.74B params
  • Architecture: llama

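A minimal sketch for running this Q8 GGUF locally with llama-cpp-python; the in-repo filename is an assumption, so check the repo's file list:

```python
# Sketch: download the Q8 GGUF from the Hub and run it with llama-cpp-python.
# The filename below is an assumption; check the repo's file list.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="Solshine/jais-adapted-7b-chat-Natural-Farmer-Q8-GGUF",
    filename="jais-adapted-7b-chat-natural-farmer-q8_0.gguf",  # assumed name
)

llm = Llama(model_path=path, n_ctx=2048)
out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "How can I improve soil fertility using natural farming inputs?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```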