---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
- code
model-index:
- name: codeparrot-ds
  results: []
datasets:
- huggingface-course/codeparrot-ds-valid
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-generation
---

# GPT2-Codeparrot

Generative Pre-trained Transformer 2 (GPT-2) is a large language model from OpenAI; pre-trained weights are available on the Hub at [gpt2](https://huggingface.co/gpt2). It is a decoder-only Transformer model trained with a causal language modeling (CLM) objective, meaning the model is trained to predict the next token in a sequence given the previous tokens. GPT-2 models are known for their ability to generate realistic and coherent text, making them useful for a variety of natural language processing tasks such as text generation, translation, and question answering.

## Model description

This model uses the base GPT-2 architecture with [insert number] parameters. It was trained on the huggingface-course/codeparrot-ds-valid dataset, a small collection of Python source code from the CodeParrot corpus (not the WebText corpus used to train the original GPT-2). Due to the limited training data, this model may not perform as well as other pre-trained GPT-2 models available on Hugging Face.

## Intended uses & limitations

This model is intended for personal learning and exploration of the GPT-2 architecture. Due to its limited training data, it may not be suitable for real-world applications. A short usage sketch appears at the end of this card.

## Training and evaluation data

This model was trained using the Transformers library with the following specifications:

- Training Data: `huggingface-course/codeparrot-ds-valid`
- Training Script: [Training_a_causal_language_model_from_scratch](https://github.com/kailas711/HugginFace-NLP-Course/blob/af464abed3f79fe7434f3310ceb97bfb68cddcef/Training_a_causal_language_model_from_scratch.ipynb)

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
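
## How to use

The snippet below is a minimal usage sketch based on the `text-generation` pipeline tag declared in the metadata above. The repo id `your-username/codeparrot-ds` is a placeholder for wherever this checkpoint is hosted, not a confirmed Hub id.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual Hub id of this checkpoint.
generator = pipeline("text-generation", model="your-username/codeparrot-ds")

# The model was trained on Python source, so prompt it with Python code.
prompt = "# create some toy data\nimport numpy as np\n"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(outputs[0]["generated_text"])
```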
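
## Reproduction sketches

The training data listed above can be inspected with the Datasets library. The `split="validation"` argument and the `content` field follow the published CodeParrot dataset schema; treat them as assumptions if the dataset layout changes.

```python
from datasets import load_dataset

# Validation slice of the CodeParrot corpus used to train this model.
ds = load_dataset("huggingface-course/codeparrot-ds-valid", split="validation")

print(ds)                      # number of rows and column names
print(ds[0]["content"][:200])  # first 200 characters of one Python file
```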
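
The hyperparameters listed above map onto `transformers.TrainingArguments` roughly as follows. This is a sketch, not the exact configuration from the linked notebook; in particular, `output_dir` is illustrative, and the Adam betas/epsilon are left at their defaults, which match the values above.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="codeparrot-ds",     # illustrative output path
    learning_rate=5e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=8,  # 32 * 8 = 256 total train batch size
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=1,
    fp16=True,                      # native AMP mixed precision
)
```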