---
language: ro
tags:
- romanian
- llama
- lora
- finetuning
license: mit
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---

# EminescuAI

## Overview

EminescuAI is a specialized language model based on meta-llama/Llama-3.1-8B-Instruct, fine-tuned using LoRA (Low-Rank Adaptation). The model was trained on the public literary works of Mihai Eminescu, one of Romania's most influential poets and writers.

Due to its relatively compact size of 8B parameters, the model has limited reasoning capabilities. While it excels at creative and stylistic tasks that mirror Eminescu's writing style, it may struggle with complex logical reasoning, long input prompts, or sophisticated dialogue.

## Technical Details

- **Base Model:** meta-llama/Llama-3.1-8B-Instruct
- **Fine-tuning Method:** LoRA (Low-Rank Adaptation)
- **Training Data:** Public works by Mihai Eminescu
- **Primary Language:** Romanian

## Capabilities

The model generates Romanian text that reflects Eminescu's literary style. It performs best on descriptive writing tasks, particularly:

- Nature-themed poetry
- Seasonal descriptions
- Romantic and contemplative prose
- Understanding and responding to Romanian-language prompts

Ideal for:

- Creative writing in Romanian
- Generating descriptive text inspired by Romanian romantic literature

## Limitations

For optimal results:

- Use Romanian-language prompts
- Focus on descriptive and creative writing tasks
- Keep in mind the model works best with themes common in Romanian romantic literature

## Technical Note

This model represents an application of modern AI technology to classical Romanian literature, demonstrating how historical literary styles can be preserved and studied using machine learning techniques.
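The core idea behind the LoRA method mentioned above can be sketched in a few lines of plain Python. This is purely illustrative (the matrix sizes, rank, and scaling factor below are hypothetical and far smaller than anything in an 8B model): instead of updating a full weight matrix `W` of shape `d_out x d_in`, LoRA trains two small matrices `B` (`d_out x r`) and `A` (`r x d_in`) and applies the update `W + (alpha / r) * B @ A`.

```python
# Hypothetical toy sizes, not the real training configuration
d_out, d_in, r, alpha = 6, 8, 2, 4

full_params = d_out * d_in          # parameters a full fine-tune would update
lora_params = d_out * r + r * d_in  # parameters LoRA actually trains

print(full_params)  # 48
print(lora_params)  # 28 -- already smaller even at this tiny scale

# Form the low-rank update (alpha / r) * B @ A in pure Python
B = [[1.0] * r for _ in range(d_out)]
A = [[0.5] * d_in for _ in range(r)]
delta = [[(alpha / r) * sum(B[i][k] * A[k][j] for k in range(r))
          for j in range(d_in)] for i in range(d_out)]

# Every row of delta is a combination of only r directions, which is
# what keeps the update low-rank and cheap to store and share.
```

At realistic scale the savings are dramatic: for a 4096x4096 attention matrix, a rank-8 adapter trains roughly 65K parameters instead of ~16.8M.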
## Usage

Since EminescuAI is a LoRA adapter, load the base model first and then attach the adapter with `PeftModel`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and attach the LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
model = PeftModel.from_pretrained(base_model, "adrianpintilie/EminescuAI")

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("adrianpintilie/EminescuAI")

# Generate text
text = "Scrie o poezie despre:"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```