---
library_name: peft
base_model: FreedomIntelligence/AceGPT-7B
language:
- ar
pipeline_tag: text-generation
---
# Model Card for the AceGPT-7B Alpagasus LoRA Adapter
This repo contains a low-rank adapter (LoRA) for [AceGPT-7B](https://huggingface.co/FreedomIntelligence/AceGPT-7B), fine-tuned on the Arabic instruction dataset [arbml/alpagasus_cleaned_ar](https://huggingface.co/datasets/arbml/alpagasus_cleaned_ar).
## How to Get Started with the Model
Use the code below to get started with the model.
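The snippet below is a minimal, untested sketch: it loads the base model with `transformers`, attaches this adapter with `peft`, and generates from an Alpaca-style prompt (the template used during training). `<this-repo-id>` is a placeholder for this repository's Hub ID, and the example instruction is illustrative only.

```python
# Minimal usage sketch. Assumptions: <this-repo-id> stands for this repo's
# Hub ID, and the example instruction below is illustrative only.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "FreedomIntelligence/AceGPT-7B",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("FreedomIntelligence/AceGPT-7B")

# Attach the low-rank adapter from this repository.
model = PeftModel.from_pretrained(base, "<this-repo-id>")
model.eval()

# The adapter was trained with the Alpaca prompt template.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nاكتب جملة ترحيبية قصيرة.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```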
## Training Details
### Training Data
[arbml/alpagasus_cleaned_ar](https://huggingface.co/datasets/arbml/alpagasus_cleaned_ar)
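Training itself read a local JSON export of this dataset (see the command below). A quick sketch of loading it directly from the Hub instead, assuming the standard `datasets` loader handles this repo:

```python
# Sketch: fetch the training data from the Hugging Face Hub.
# (The training run used a local export, alpagasus_cleaned_ar.json.)
from datasets import load_dataset

data = load_dataset("arbml/alpagasus_cleaned_ar")
print(data)
```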
#### Training Hyperparameters
```
python finetune.py \
    --base_model 'FreedomIntelligence/AceGPT-7B' \
    --data_path 'alpagasus_cleaned_ar.json' \
    --output_dir 'lora-alpaca_alpagasus'
Training Alpaca-LoRA model with params:
base_model: FreedomIntelligence/AceGPT-7B
data_path: alpagasus_cleaned_ar.json
output_dir: lora-alpaca_alpagasus
batch_size: 128
micro_batch_size: 4
num_epochs: 3
learning_rate: 0.0003
cutoff_len: 256
val_set_size: 2000
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules: ['q_proj', 'v_proj']
train_on_inputs: True
add_eos_token: False
group_by_length: False
wandb_project:
wandb_run_name:
wandb_watch:
wandb_log_model:
resume_from_checkpoint: False
prompt template: alpaca
```
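For reference, the adapter settings above map onto a `peft.LoraConfig` as sketched below. This is not the exact training code (the run used the alpaca-lora `finetune.py`); it only restates the logged hyperparameters in PEFT terms.

```python
# Hedged sketch: the logged LoRA hyperparameters expressed as a LoraConfig.
from peft import LoraConfig

config = LoraConfig(
    r=8,                                  # lora_r
    lora_alpha=16,                        # lora_alpha
    lora_dropout=0.05,                    # lora_dropout
    target_modules=["q_proj", "v_proj"],  # attention projections adapted
    bias="none",
    task_type="CAUSAL_LM",
)

# In the standard alpaca-lora script, batch_size / micro_batch_size gives
# the gradient accumulation: 128 / 4 = 32 steps per optimizer update.
gradient_accumulation_steps = 128 // 4
```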
### Framework versions
- PEFT 0.7.2.dev0