Arabic Poetry Fine-Tuned Model
This model is a fine-tuned version of the GPT-2 model, specifically trained on Arabic poetry. It is designed to generate Arabic poetry and can be used for creative writing, educational purposes, or research in natural language processing.
Try the Model
You can try the model directly in the inference widget on its Hugging Face model page.
Model Details
- Model Type: GPT-2
- Language: Arabic
- License: Apache-2.0
- Author: NightPrince
Setup
You can run the smashed model with these steps:
- Check that the requirements from the original repo NightPrince/Arabic-Poetry-FineTuned are installed. In particular, check the python, cuda, and transformers versions.
- Make sure that you have installed the quantization-related packages (the version constraint is quoted so the shell does not treat `>` as a redirect):

```bash
pip install transformers accelerate 'bitsandbytes>0.37.0'
```
- Load & run the model.
from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("PrunaAI/NightPrince-Arabic-Poetry-FineTuned-bnb-8bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("NightPrince/Arabic-Poetry-FineTuned") input_ids = tokenizer("ุญุฏุซูู ุนู ุงูุญุจ ูู ุฒู ู ุงูุญุฑุจ,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0])
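By default, `generate` uses greedy decoding, which tends to produce repetitive verse. A minimal variation with sampling enabled (the parameter values below are illustrative, not tuned for this model):

```python
# Sampling instead of greedy decoding; these values are illustrative, not tuned.
outputs = model.generate(
    input_ids,
    max_new_tokens=216,
    do_sample=True,          # sample from the token distribution instead of argmax
    temperature=0.9,         # <1 sharpens, >1 flattens the distribution
    top_p=0.95,              # nucleus sampling: keep the smallest set covering 95% of mass
    repetition_penalty=1.2,  # discourage repeated verses
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```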
Configurations
The configuration info is in `smash_config.json`.
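If you want to inspect those settings programmatically, you can fetch the file with `huggingface_hub` (a small sketch; the exact keys in the file depend on the Pruna version used to smash the model):

```python
import json
from huggingface_hub import hf_hub_download

# Download smash_config.json from the model repo and pretty-print it.
path = hf_hub_download(
    repo_id="PrunaAI/NightPrince-Arabic-Poetry-FineTuned-bnb-8bit-smashed",
    filename="smash_config.json",
)
with open(path) as f:
    print(json.dumps(json.load(f), indent=2))
```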
Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, NightPrince/Arabic-Poetry-FineTuned, which provided the base model, before using this smashed model. The license of the pruna-engine is available on its PyPI page.
Intended Use
This model is intended for generating Arabic poetry. It can be used in applications such as the following (a minimal integration sketch appears after this list):
- Creative writing tools
- Educational resources for learning Arabic poetry
- Research in natural language processing and generation
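For embedding the model in such applications, it can be wrapped in a standard transformers text-generation pipeline. A minimal sketch, reusing the model and tokenizer loaded in the Setup section above:

```python
from transformers import pipeline

# Wrap the already-loaded model and tokenizer (see Setup) in a pipeline.
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Prompt: "Tell me about love in the time of war,"
result = generator("حدثني عن الحب في زمن الحرب,", max_new_tokens=128, do_sample=True)
print(result[0]["generated_text"])
```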
Training Data
The model was fine-tuned on a dataset of Arabic poetry. The dataset includes works from various poets and covers a range of styles and themes.
Training Procedure
- Framework: PyTorch
- Hardware: Trained on a GPU
- Epochs: 5
- Batch Size: 8
- Learning Rate: not disclosed (a hypothetical sketch of this setup follows the list)
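The sketch below illustrates what such a setup typically looks like with the transformers Trainer. It is hypothetical: the exact base checkpoint, learning rate, and dataset file are not public, so `gpt2`, `5e-5`, and `arabic_poetry.txt` are placeholders.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder base checkpoint
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Placeholder corpus: one poem (or verse) per line in a plain-text file.
dataset = load_dataset("text", data_files={"train": "arabic_poetry.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="arabic-poetry-gpt2",
        num_train_epochs=5,             # as reported above
        per_device_train_batch_size=8,  # as reported above
        learning_rate=5e-5,             # placeholder; actual value undisclosed
    ),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```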
Evaluation
The model was evaluated on its ability to generate coherent and stylistically appropriate poetry; the final training loss was approximately 2.67.
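For intuition, a cross-entropy loss of 2.67 nats corresponds to a per-token perplexity of roughly e^2.67 ≈ 14.4, i.e. on average the model is about as uncertain as a uniform choice among ~14 tokens:

```python
import math

# Perplexity implied by the reported training loss (cross-entropy in nats).
print(math.exp(2.67))  # ≈ 14.44
```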
Limitations and Biases
As with any language model, this model may generate biased or inappropriate content. Users should be aware of these limitations and use the model responsibly.
Acknowledgements
This model was developed by NightPrince and is hosted on Hugging Face. Special thanks to the creators of the original GPT-2 model and the Hugging Face team for their support.
Contact
For questions or feedback, please contact NightPrince via Hugging Face.