---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
language:
- th
tags:
- instruction-finetuning
size_categories:
- 10K<n<100K
---
# Summary

🇹🇭 A Thai instruction dataset translated from [gbharti/wealth-alpaca_lora](https://huggingface.co/datasets/gbharti/wealth-alpaca_lora) using Google Cloud Translation.

The source dataset combines Stanford's Alpaca (https://github.com/tatsu-lab/stanford_alpaca) and FiQA (https://sites.google.com/view/fiqa/), plus an additional 1.3k pairs generated with GPT-3.5.

A script for fine-tuning with PEFT/LoRA on Kaggle's (https://www.kaggle.com) free resources: https://www.kaggle.com/code/gbhacker23/wealth-alpaca-lora
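Since the data derives from Alpaca, each record presumably follows the Alpaca `instruction`/`input`/`output` schema. A minimal sketch of turning one such record into a single training prompt, assuming those field names (the exact column names and prompt template here are illustrative, not part of this card):

```python
# Sketch: format one Alpaca-style record into a flat prompt string.
# Field names (instruction / input / output) are assumed from the
# upstream Alpaca schema; adjust to the actual parquet columns.

def build_prompt(example: dict) -> str:
    """Join instruction, optional input, and output into one prompt."""
    if example.get("input"):
        return (
            f"Instruction: {example['instruction']}\n"
            f"Input: {example['input']}\n"
            f"Response: {example['output']}"
        )
    # Records without an input context use a shorter template.
    return (
        f"Instruction: {example['instruction']}\n"
        f"Response: {example['output']}"
    )

# Hypothetical record for illustration only.
record = {"instruction": "สรุปข้อความต่อไปนี้", "input": "", "output": "..."}
print(build_prompt(record))
```

In practice this function would be mapped over the dataset (e.g. with `datasets.Dataset.map`) before tokenization for fine-tuning.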
# Supported Tasks

- Training LLMs
- Synthetic Data Generation
- Data Augmentation

# Languages

Thai

# Version

1.0