---
library_name: transformers
tags:
- code
- NextJS
language:
- en
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
base_model_relation: finetune
pipeline_tag: text-generation
---
# Model Information
Qwen2.5-1.5B-NextJs-code is a quantized, fine-tuned version of the Qwen2.5-1.5B-Instruct model, designed specifically for generating Next.js code.
- **Base model:** Qwen/Qwen2.5-1.5B-Instruct
# How to use
Starting with transformers version 4.44.0, you can run conversational inference using the Transformers pipeline.
Make sure your installation is up to date via `pip install --upgrade transformers`.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
```
```python
def get_pipeline():
    model_name = "nirusanan/Qwen2.5-1.5B-NextJs-code"

    # Load the tokenizer and reuse the EOS token for padding.
    tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    tokenizer.pad_token = tokenizer.eos_token

    # Load the model in half precision on the first CUDA device.
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,
        device_map="cuda:0",
        trust_remote_code=True
    )

    pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=3500)
    return pipe


pipe = get_pipeline()
```
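If you don't have a dedicated CUDA device, a minimal variant (an assumption, not part of the original card) is to let Accelerate choose the placement via `device_map="auto"`, which requires the `accelerate` package:

```python
# Hedged variant (not from the original card): let Accelerate pick the device.
model = AutoModelForCausalLM.from_pretrained(
    "nirusanan/Qwen2.5-1.5B-NextJs-code",
    torch_dtype=torch.float16,  # consider torch.float32 on CPU-only machines
    device_map="auto",          # requires `pip install accelerate`
    trust_remote_code=True
)
```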
```python
def generate_prompt(project_title, description):
    prompt = f"""Below is an instruction that describes a project. Write Nextjs 14 code to accomplish the project described below.
### Instruction:
Project:
{project_title}
Project Description:
{description}
### Response:
"""
    return prompt
```
```python
# Replace the placeholder title and description with your own project details.
prompt = generate_prompt(project_title="Your NextJs project", description="Your NextJs project description")
result = pipe(prompt)

# Trim everything after the "### End" marker emitted by the fine-tuned model.
generated_text = result[0]['generated_text']
print(generated_text.split("### End")[0])
```
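The pipeline also forwards standard generation keyword arguments, so you can trade determinism for variety. The sampling values below are illustrative defaults, not settings recommended by the model card:

```python
# Sampling settings are illustrative, not prescribed by the model card.
result = pipe(
    prompt,
    max_new_tokens=2048,  # cap newly generated tokens instead of total length
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    pad_token_id=pipe.tokenizer.eos_token_id
)
print(result[0]["generated_text"].split("### End")[0])
```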