Think-and-Code-React

This is a fine-tuned Qwen model designed to provide frontend development solutions with enhanced reasoning capabilities for ReactJS. It reasons through the task, writes the code, and then offers best-practice notes after answering.

Table of Contents

  1. Problem Statement
  2. Solution
  3. How It Works
  4. How to Use This Model
  5. Future Developments
  6. License
  7. Model Card Contact

Problem Statement

Coding is a challenging task for small models: they are generally not capable of writing code with high accuracy and sound reasoning. React is a widely used JavaScript library, yet we often find that small LLMs are not very well specialized for programming.

Solution

We train the LLM on a React-specific dataset and enable reasoning. This gives us a cold start with a React-based LLM that understands many React concepts. The model:

  1. Understands the user's query
  2. Evaluates everything in a <think> tag
  3. Provides the answer in an <answer> tag
  4. Additionally provides best practices in a <verifier_answer> tag
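Assuming the response literally wraps each section in these tags (the exact layout may vary with the prompt), the parts can be pulled apart with a small helper. This is a sketch, not part of the released model code:

```python
import re

def extract_sections(response: str) -> dict:
    """Split a model response into its <think>, <answer>, and
    <verifier_answer> sections; a missing tag yields None."""
    sections = {}
    for tag in ("think", "answer", "verifier_answer"):
        match = re.search(rf"<{tag}>(.*?)</{tag}>", response, re.DOTALL)
        sections[tag] = match.group(1).strip() if match else None
    return sections

sample = "<think>useEffect runs after render.</think><answer>Use useEffect.</answer>"
parts = extract_sections(sample)
print(parts["answer"])  # -> Use useEffect.
```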

How It Works

  1. Data Collection: The model is trained on thousands of React-specific scenarios, which gives us a cold start with good reasoning capabilities.

  2. Feature Extraction: We upscale the model with RL to reach a higher level of accuracy and better reasoning output.

  3. Machine Learning: A machine-learning pipeline is employed to learn high-quality React-specific code, and it can be expanded to other frameworks.
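As an illustration only (the actual dataset schema is not published), a React-specific training record could pair a prompt with reasoning, answer, and best-practice fields mirroring the tags above. All field names here are assumptions:

```python
import json

# Hypothetical training record; the field names are illustrative,
# not the released dataset schema.
record = {
    "prompt": "How do I memoize an expensive computation in React?",
    "think": "The value only changes when its inputs change, so useMemo fits.",
    "answer": "const total = useMemo(() => computeTotal(items), [items]);",
    "verifier_answer": "Keep the dependency array complete to avoid stale values.",
}

line = json.dumps(record)          # one JSONL line of such a dataset
assert json.loads(line) == record  # round-trips cleanly
```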

How to Use This Model

Prerequisites

  • Python 3.7 or higher
  • Required libraries (install via pip):
    pip install torch transformers
    

Installation

  1. Clone this repository:
    git clone https://huggingface.co/foduucom/Think-and-Code-React
    cd Think-and-Code-React
    

Usage

  1. Import the necessary libraries:
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
  2. Set up the model and tokenizer:
model_path = "./Path-to-llm-folder"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
  3. Define the generation function:
def generate_text(prompt, max_length=2000):
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    output = model.generate(
        **inputs,
        max_new_tokens=max_length,  # cap the number of newly generated tokens
        do_sample=True,
        temperature=0.7
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)
  4. Run the model:
prompt = "Write code in React for calling an API at https://example.com/test"
generated_text = generate_text(prompt)

print(generated_text)

Future Developments

This is a cold-start LLM, and its capabilities can be further enhanced with RL so that it performs even better.

Model Card Contact

For inquiries and contributions, please contact us at [email protected].

@ModelCard{
    author = {Nehul Agrawal, Priyal Mehta and Ayush Panday},
    title  = {Think and Code in React},
    year   = {2025}
}
Model size: 494M parameters · Tensor type: F32 · Format: Safetensors