---
license: mit
datasets:
- Miguelpef/3d-prompt
language:
- es
base_model:
- facebook/bart-base
new_version: Miguelpef/bart-base-lora-3DPrompt
pipeline_tag: text-generation
library_name: transformers
tags:
- 3d
- prompt
- español
---

![Miguelpef/bart-base-lora-3DPrompt](images/ModeloLora.jpg)

Spanish version

**This model is still in training. It is not the final version, and it may produce artifacts or perform poorly in some cases.**

## Setting Up

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import PeftModel, PeftConfig

# Define the repository ID of the LoRA adapter
repo_id = "Miguelpef/bart-base-lora-3DPrompt"

# Load the PEFT configuration from the Hub
peft_config = PeftConfig.from_pretrained(repo_id)

# Load the base model (facebook/bart-base) from the Hub
model = AutoModelForSeq2SeqLM.from_pretrained(peft_config.base_model_name_or_path)

# Load the tokenizer from the Hub
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# Wrap the base model with the LoRA adapter
model = PeftModel.from_pretrained(model, repo_id)

# Generate a detailed 3D prompt from a short object description
def generar_prompt_desde_objeto(objeto):
    inputs = tokenizer(objeto, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_length=100)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

mi_objeto = "Mesa grande marrón"  # Change this object
prompt_generado = generar_prompt_desde_objeto(mi_objeto)
print(prompt_generado)
```
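
For deployment, you can optionally fold the LoRA weights into the base model with PEFT's `merge_and_unload()`, which removes the adapter wrapper at inference time. A minimal sketch, assuming the `model` and `tokenizer` loaded above:

```python
# Merge the LoRA weights into the base model weights and drop the
# PEFT wrapper; the result behaves like a plain BART seq2seq model.
merged_model = model.merge_and_unload()

inputs = tokenizer("Mesa grande marrón", return_tensors="pt").to(merged_model.device)
outputs = merged_model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```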