---
base_model: google/gemma-2b
datasets:
- tatsu-lab/alpaca
language: en
tags:
- torchtune
---
# My Torchtune Model

This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b).
# Model description

More information needed

# Training and evaluation results

More information needed
# Training procedure

This model was trained with the [torchtune](https://github.com/pytorch/torchtune) library using the following command:

```bash
/Users/salmanmohammadi/projects/torchtune/recipes/lora_finetune_single_device.py --config /Users/salmanmohammadi/projects/torchtune/recipes/configs/gemma/2B_lora_single_device.yaml \
device=mps \
epochs=1 \
max_steps_per_epoch=10
```
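The trailing `key=value` arguments are config overrides applied on top of the YAML config file (torchtune parses these with OmegaConf-style dotlist overrides). As a rough illustration only, not torchtune's actual implementation, the following hypothetical sketch shows how such overrides update a loaded config:

```python
# Hypothetical sketch of key=value config overrides, as used in the command
# above. This is NOT torchtune's real override code; torchtune uses OmegaConf.
def apply_overrides(config: dict, overrides: list[str]) -> dict:
    """Apply 'key=value' strings to a config dict, coercing integer values."""
    for item in overrides:
        key, _, raw = item.partition("=")
        try:
            value = int(raw)   # numeric overrides, e.g. epochs=1
        except ValueError:
            value = raw        # string overrides, e.g. device=mps
        config[key] = value
    return config

# Defaults as they might appear in the YAML config (illustrative values).
config = {"device": "cuda", "epochs": 3, "max_steps_per_epoch": None}
apply_overrides(config, ["device=mps", "epochs=1", "max_steps_per_epoch=10"])
# config is now {'device': 'mps', 'epochs': 1, 'max_steps_per_epoch': 10}
```

Overrides always take precedence over the YAML values, so the same config file can be reused across runs while varying only the device, epoch count, or step budget from the command line.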
# Framework versions

- torchtune 0.0.0
- torchao 0.5.0
- datasets 2.20.0
- sentencepiece 0.2.0