Tags: Text Generation · Transformers · PyTorch · code · gpt2 · custom_code · Eval Results · text-generation-inference · Inference Endpoints
Commit 718da24 by harmdevries (1 parent: bdeb6cc)

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED

@@ -201,7 +201,7 @@ model-index:
 # Model Summary
 
 The SantaCoder models are a series of 1.1B parameter models trained on the Python, Java, and JavaScript subset of [The Stack (v1.1)](https://huggingface.co/datasets/bigcode/the-stack) (which excluded opt-out requests).
-The main model uses multi-query attention, was trained using near-deduplication and comment-to-code ratio as filtering criteria and using the Fill-in-the-Middle objective.
+The main model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150), was trained using near-deduplication and comment-to-code ratio as filtering criteria and using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255).
 In addition there are several models that were trained on datasets with different filter parameters and with architecture and objective variations.
 
 - **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM)

@@ -221,7 +221,7 @@ In addition there are several models that were trained on datasets with differen
 |`dedup-alt`| MQA | AR + FIM | Stronger near-deduplication |
 |`final`| MQA | AR + FIM | Stronger near-deduplication and comment-to-code ratio |
 
-The `final` model is the best performing model and was trained twice as long as the others. This checkpoint is the default model and available on the `main` branch. All other checkpoints are on separate branches with according names.
+The `final` model is the best performing model and was trained twice as long (236B tokens) as the others. This checkpoint is the default model and available on the `main` branch. All other checkpoints are on separate branches with according names.
 
 # Use
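The updated sentence links Multi Query Attention to its paper rather than explaining it. As a rough orientation: the variant keeps separate query heads but shares a single key/value head across all of them, which shrinks the KV cache during generation. The PyTorch sketch below only illustrates that idea; it is not the checkpoint's actual attention code, which ships as custom modeling code in the repository.

```python
import torch
from torch import nn

class MultiQueryAttention(nn.Module):
    """Illustrative multi-query attention: many query heads, one shared K/V head."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)             # n_heads query heads
        self.kv_proj = nn.Linear(d_model, 2 * self.head_dim)  # a single shared key/value head
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)  # (b, h, t, d)
        k, v = self.kv_proj(x).split(self.head_dim, dim=-1)                         # (b, t, d) each
        k, v = k.unsqueeze(1), v.unsqueeze(1)                  # (b, 1, t, d): broadcast over heads
        att = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5 # (b, h, t, t)
        mask = torch.tril(torch.ones(t, t, dtype=torch.bool, device=x.device))
        att = att.masked_fill(~mask, float("-inf")).softmax(dim=-1)
        out = (att @ v).transpose(1, 2).reshape(b, t, -1)      # concatenate heads
        return self.out_proj(out)
```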
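The Fill-in-the-Middle objective is likewise only linked, not shown. At inference time it comes down to prompt formatting with sentinel tokens around the known prefix and suffix. The snippet below is a hedged sketch: the repo id `bigcode/santacoder` and the sentinel names `<fim-prefix>`, `<fim-suffix>`, `<fim-middle>` are assumptions to verify against the tokenizer's `special_tokens_map` on the branch you load.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/santacoder"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# custom_code tag => the architecture is loaded from the repository's own modeling code
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

# FIM: ask the model to generate the span between a known prefix and suffix.
# Sentinel token names below are assumptions; check tokenizer.special_tokens_map.
prefix = "def fibonacci(n):\n    "
suffix = "\n    return a"
prompt = f"<fim-prefix>{prefix}<fim-suffix>{suffix}<fim-middle>"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```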
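Finally, the changed paragraph notes that only `final` lives on `main` and that every other checkpoint sits on a branch of the same name. With `transformers`, a specific branch is selected via the `revision` argument; the repo id below is again an assumption.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/santacoder"  # assumed repo id
branch = "dedup-alt"               # any non-default checkpoint from the table above

# revision picks the branch holding the checkpoint; trust_remote_code is required
# because the repository ships its own modeling code (see the custom_code tag).
tokenizer = AutoTokenizer.from_pretrained(checkpoint, revision=branch)
model = AutoModelForCausalLM.from_pretrained(checkpoint, revision=branch, trust_remote_code=True)
```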