Miguelpef/bart-base-lora-3DPrompt

This model is still in the training phase; it is not the final version and may produce artifacts or perform poorly in some cases.

Setting Up

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import PeftModel, PeftConfig

# Define the repository ID
repo_id = "Miguelpef/bart-base-lora-3DPrompt"

# Load the PEFT configuration from the Hub
peft_config = PeftConfig.from_pretrained(repo_id)

# Load the base model from the Hub
model = AutoModelForSeq2SeqLM.from_pretrained(peft_config.base_model_name_or_path)

# Load the tokenizer from the Hub
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# Wrap the base model with PEFT
model = PeftModel.from_pretrained(model, repo_id)
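
# Optional (not part of the original card): for faster inference you can merge
# the LoRA weights into the base model with PEFT's merge_and_unload(); skip this
# if you prefer to keep the adapter separate.
# model = model.merge_and_unload()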

# Now you can use the wrapped model for inference
def generar_prompt_desde_objeto(objeto):
    # Tokenize the object description and move the tensors to the model's device
    inputs = tokenizer(objeto, return_tensors='pt').to(model.device)
    # Generate the 3D prompt and decode it back to text
    outputs = model.generate(**inputs, max_length=100)
    prompt_generado = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return prompt_generado

mi_objeto = "Mesa grande marrón"  # Change this object description
prompt_generado = generar_prompt_desde_objeto(mi_objeto)
print(prompt_generado)
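
Optionally, if a GPU is available you can move the wrapped model onto it before calling the function above; the tokenized inputs already follow model.device, so nothing else needs to change. A minimal sketch (not in the original card):

import torch

# Move the model to GPU when one is available (falls back to CPU otherwise).
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)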

Base model: facebook/bart-base