
Model Card for GPT_2_CODE

The goal is to create a small GPT-2 Python coder.

Table of Contents

Model Details

Model Description

Work in progress. The goal is to create a small GPT-2 Python coder.

  • Developed by: CodeMonkey
  • Shared by [Optional]: More information needed
  • Model type: Language model
  • Language(s) (NLP): eng
  • License: wtfpl
  • Parent Model: More information needed
  • Resources for more information: More information needed

Uses

A Python coding assistant.

Direct Use

Generate Python code snippets.

Downstream Use [Optional]

Semi-automatic coding assistant.

Out-of-Scope Use

Describing or explaining code is out of scope; the model is meant to keep being fine-tuned on question/Python-code datasets.

Training Details

Training Data

  • flytech/python-codes-25k
  • espejelomar/code_search_net_python_10000_examples
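Both corpora are hosted on the Hugging Face Hub and can be inspected with the `datasets` library. A minimal loading sketch; split names and column layouts are not guaranteed and should be checked against each dataset card.

```python
# Load both training corpora; verify splits and columns on each dataset card.
from datasets import load_dataset

python_codes = load_dataset("flytech/python-codes-25k")
code_search_net = load_dataset("espejelomar/code_search_net_python_10000_examples")

print(python_codes)       # available splits and column names
print(code_search_net)    # available splits and column names
```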

Training Procedure

Standard training loop with an 80/20 train/validation split and learning-rate scheduling (see the sketch below).
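A minimal sketch of the kind of procedure described here (80/20 train/validation split, AdamW, and a linear learning-rate schedule), not the exact script used to train this checkpoint; the hyperparameters, the `text` column name, and the tokenization settings are assumptions.

```python
# Illustrative fine-tuning loop: 80/20 train/val split with a linear LR scheduler.
import torch
from torch.utils.data import DataLoader
from datasets import load_dataset
from transformers import GPT2LMHeadModel, GPT2TokenizerFast, get_linear_schedule_with_warmup

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-medium")
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

ds = load_dataset("flytech/python-codes-25k", split="train")
ds = ds.train_test_split(test_size=0.2, seed=42)  # 80/20 train/validation

def collate(batch):
    # "text" is an assumed column name; adjust to the dataset's actual schema.
    enc = tokenizer([ex["text"] for ex in batch], truncation=True,
                    max_length=512, padding=True, return_tensors="pt")
    enc["labels"] = enc["input_ids"].clone()
    return enc

train_dl = DataLoader(ds["train"], batch_size=4, shuffle=True, collate_fn=collate)
val_dl = DataLoader(ds["test"], batch_size=4, collate_fn=collate)

epochs = 3
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=epochs * len(train_dl))

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
for epoch in range(epochs):
    model.train()
    for batch in train_dl:
        batch = {k: v.to(device) for k, v in batch.items()}
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()

    model.eval()
    with torch.no_grad():
        val_loss = sum(model(**{k: v.to(device) for k, v in b.items()}).loss.item()
                       for b in val_dl) / len(val_dl)
    print(f"Epoch {epoch + 1}/{epochs} | validation loss: {val_loss:.4f}")
```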

Preprocessing

More information needed

Speeds, Sizes, Times

flytech/python-codes-25k

  • Epochs: 3
  • Training loss: 0.4007 | Validation loss: 0.5526

espejelomar/code_search_net_python_10000_examples

  • Starting loss: 2.0862
  • Epoch 1/4 | Training loss: 1.5355 | Validation loss: 1.1723
  • Epoch 2/4 | Training loss: 1.0501 | Validation loss: 1.0702
  • Epoch 3/4 | Training loss: 0.9804 | Validation loss: 1.0798
  • Epoch 4/4 | Training loss: 0.9073 | Validation loss: 1.0772

Evaluation

Manual comparison with the base gpt2-medium model.
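A sketch of such a manual comparison: prompt the base gpt2-medium model and this checkpoint with the same instruction and inspect the generations side by side. The prompt and generation settings are illustrative, not part of the original evaluation.

```python
# Side-by-side generation from the base model and the fine-tuned checkpoint.
from transformers import pipeline

prompt = "Write a Python function that reverses a string."

base = pipeline("text-generation", model="gpt2-medium")
tuned = pipeline("text-generation", model="K00B404/GPT_2_CODE")

for name, pipe in [("gpt2-medium", base), ("GPT_2_CODE", tuned)]:
    out = pipe(prompt, max_new_tokens=80, do_sample=False)[0]["generated_text"]
    print(f"--- {name} ---\n{out}\n")
```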

Testing Data

  • flytech/python-codes-25k
  • espejelomar/code_search_net_python_10000_examples

Factors

80/20 train/validation split.

Metrics

Training and validation loss, with learning-rate scheduling.

Results

Better at Python code generation than the base gpt2-medium model.

Model Examination

More information needed

Environmental Impact

  • Hardware Type: CPU and Colab T4
  • Hours used: 4
  • Cloud Provider: Google Colab
  • Compute Region: NL

Model Architecture and Objective

GPT-2 (gpt2-medium, 355M parameters) with a causal language-modeling objective.

Compute Infrastructure

More information needed

Hardware

CPU and Colab T4

Software

PyTorch and custom Python training scripts.

More Information [optional]

Experimental

Model Card Authors [optional]

CodeMonkeyXL

Model Card Contact

K00B404 on Hugging Face.

How to Get Started with the Model

Use the code below to get started with the model.
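A minimal sketch with the `transformers` library; the generation settings are illustrative, and if the repository does not ship its own tokenizer files, the standard gpt2-medium tokenizer can be used instead.

```python
# Load K00B404/GPT_2_CODE and generate a Python snippet from a short instruction.
from transformers import AutoModelForCausalLM, AutoTokenizer

# If the repo has no tokenizer files, fall back to "gpt2-medium".
tokenizer = AutoTokenizer.from_pretrained("K00B404/GPT_2_CODE")
model = AutoModelForCausalLM.from_pretrained("K00B404/GPT_2_CODE")

prompt = "Write a Python function that checks whether a number is prime."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=120, do_sample=True,
                        temperature=0.7, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```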

