
Model Card for xavierwoon/cesterrewards

Cesterrewards is a BERT-based model that predicts the code coverage of Libcester unit test cases.

Model Details

Model Description

  • Developed by: Xavier Woon

Recommendations

Expanding the dataset would improve the model's accuracy and robustness and make its code-coverage predictions better reflect real-life code.

How to Get Started with the Model

Use the code below to get started with the model.

    from transformers import AutoModelForSequenceClassification, AutoTokenizer
    import torch
    
    reward_name = "xavierwoon/cesterrewards"
    reward_model = AutoModelForSequenceClassification.from_pretrained(reward_name)
    tokenizer = AutoTokenizer.from_pretrained(reward_name)
    
    # Replace this prompt with the Libcester unit test case you want to score
    prompt = """
    CESTER_TEST(create_stack, test_instance,
    {
        struct Stack stack;
        initStack(&stack);
        cester_assert_equal(stack.top, -1);
    })
    """
    
    inputs = tokenizer(prompt, return_tensors="pt", padding=True, truncation=True, max_length=512)
    # Put the model in evaluation mode
    reward_model.eval()
    
    # Perform inference to get the reward score
    with torch.no_grad():
        outputs = reward_model(**inputs)
    reward_score = outputs.logits.item()  # Extract the scalar value
    
    print("Expected Code Coverage:", reward_score)
    

Training Details

Training Data

The training data was built from Data Structures and Algorithms (DSA) C code generated with ChatGPT, together with corresponding Cester test cases that ChatGPT produced for that code. Each test case's measured code coverage was then recorded in the dataset under a score field.
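
With (test case, score) pairs like these, a regression head on top of BERT can be fine-tuned to predict coverage. The sketch below is illustrative only: the CSV file name, the "text"/"score" column names, the bert-base-uncased base checkpoint, and the hyperparameters are assumptions rather than details taken from the original training run.

    # Minimal fine-tuning sketch for a coverage-prediction (regression) model
    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              DataCollatorWithPadding, Trainer, TrainingArguments)

    # Hypothetical dataset: one Cester test case per row plus its measured coverage score
    dataset = load_dataset("csv", data_files="cester_coverage.csv")["train"]
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

    def tokenize(batch):
        enc = tokenizer(batch["text"], truncation=True, max_length=512)
        enc["labels"] = [float(s) for s in batch["score"]]  # regression targets
        return enc

    dataset = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

    # num_labels=1 gives a single output neuron; Transformers trains it with MSE loss
    model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)

    args = TrainingArguments(output_dir="cesterrewards-finetune",
                             per_device_train_batch_size=8, num_train_epochs=3)
    trainer = Trainer(model=model, args=args, train_dataset=dataset,
                      data_collator=DataCollatorWithPadding(tokenizer))
    trainer.train()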

Training Procedure

1. Prompt GPT for sample DSA C code
2. Prompt GPT for Libcester unit test cases targeting 100% code coverage
3. Measure the code coverage of the generated test cases and record the result (a sketch of this step is shown below)
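
The snippet below is a rough sketch of how step 3 could be automated with gcc's coverage instrumentation and gcov. The file names, compiler flags, and gcov-based workflow are assumptions about how coverage was measured, not a verbatim copy of the original pipeline.

    # Sketch: compile a generated Libcester test file with coverage
    # instrumentation, run it, and parse the line-coverage percentage from gcov
    import re
    import subprocess

    def measure_coverage(test_file="test_stack.c"):
        # Compile with gcov instrumentation (assumes gcc and the cester header are available)
        subprocess.run(["gcc", test_file, "-fprofile-arcs", "-ftest-coverage", "-o", "test_bin"],
                       check=True)
        subprocess.run(["./test_bin"], check=True)  # running the tests writes .gcda coverage data
        report = subprocess.run(["gcov", test_file], capture_output=True, text=True,
                                check=True).stdout
        match = re.search(r"Lines executed:\s*([\d.]+)%", report)  # e.g. "Lines executed:100.00% of 42"
        return float(match.group(1)) if match else 0.0

    print(measure_coverage())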