shahidul034
committed on
Update README.md
README.md
CHANGED
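The README edited in the diff below notes that a LoRA adapter was finetuned and later merged with the base unquantized model. As a toy numeric sketch of what that merge step means mathematically (illustrative rank and values only — the model's actual configuration is the `LoraConfig` shown in the diff, and merging in practice is done with library tooling such as peft's `merge_and_unload`), LoRA learns a low-rank update `B @ A` that can be folded back into a base weight as `W + (alpha / r) * B @ A`:

```python
# Toy illustration of a LoRA merge (not the project's actual code):
# instead of updating a full weight matrix W, LoRA trains two small
# matrices A (r x in) and B (out x r); merging folds the adapter into
# the base weight as W' = W + (alpha / r) * B @ A.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def merge_lora(W, A, B, alpha, r):
    """Fold a LoRA adapter into the base weight: W + (alpha / r) * B @ A."""
    delta = matmul(B, A)
    scale = alpha / r
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Made-up numbers: a 2x2 base weight with a rank-1 adapter (r=1, alpha=1).
W = [[1.0, 0.0],
     [0.0, 1.0]]
A = [[1.0, 2.0]]   # r x in  = 1 x 2
B = [[0.5],        # out x r = 2 x 1
     [1.0]]

merged = merge_lora(W, A, B, alpha=1.0, r=1)
# merged == [[1.5, 1.0], [1.0, 3.0]]
```

Note that, as the README says, the merge is applied to the base model reloaded *unquantized*, not to the 8-bit weights used during training.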
@@ -4,11 +4,11 @@ pipeline_tag: text-generation
 ---
 KUETLLM is a [zephyr7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) finetune, using a dataset with prompts and answers about Khulna University of Engineering and Technology.
 It was loaded in 8-bit quantization using [bitsandbytes](https://github.com/TimDettmers/bitsandbytes). [LORA](https://huggingface.co/docs/diffusers/main/en/training/lora) was used to finetune an adapter, which was later merged with the base unquantized model.
-
+
 datasets:
-- University information(collected from website)
+- University information (collected from website, https://kuet.ac.bd/)
 
-Below
+Below are the training configurations for the fine-tuning process:
 ```
 LoraConfig:
 r=16,