Update README.md
README.md CHANGED
```diff
@@ -2,30 +2,29 @@
 tags:
 - generated_from_trainer
 model-index:
-- name:
+- name: starchat-beta
 results: []
+license: bigcode-openrail-m
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
 
-
+<img src="https://huggingface.co/spaces/HuggingFaceH4/starchat-playground/resolve/main/thumbnail.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
 
-
-It achieves the following results on the evaluation set:
-- Loss: 1.4720
+# Model Card for StarChat Beta
 
-
+StarChat is a series of language models that are trained to act as helpful coding assistants. StarChat Beta is the second model in the series, and is a fine-tuned version of [StarCoderPlus](https://huggingface.co/bigcode/starcoderplus) that was trained on an ["uncensored"](https://erichartford.com/uncensored-models) variant of the [`openassistant-guanaco` dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco). We found that removing the in-built alignment of the OpenAssistant dataset boosted performance on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) and made the model more helpful at coding tasks. However, this means that the model is likely to generate problematic text when prompted to do so and should only be used for educational and research purposes.
+
+- **Repository:** [bigcode-project/starcoder](https://github.com/bigcode-project/starcoder)
+- **Languages:** 35+ Natural languages & 80+ Programming languages
 
-More information needed
 
 ## Intended uses & limitations
 
-
+The model was fine-tuned on a variant of the [`OpenAssistant/oasst1`](https://huggingface.co/datasets/OpenAssistant/oasst1) dataset, which contains a diverse range of dialogues in over 35 languages. As a result, the model can be used for chat and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground) to test its coding capabilities.
 
 ## Training and evaluation data
 
-
+StarChat Beta is trained on an ["uncensored"](https://erichartford.com/uncensored-models) variant of the [`openassistant-guanaco` dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco). We applied the same [recipe](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered/blob/main/wizardlm_clean.py) used to filter the ShareGPT datasets behind [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered).
 
 ## Training procedure
 
```
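The updated card directs readers to the hosted playground to try the model. For local experimentation, a minimal sketch along these lines should work with the `transformers` pipeline API; the checkpoint id `HuggingFaceH4/starchat-beta` and the `<|system|>`/`<|user|>`/`<|assistant|>` dialogue tokens are assumptions not spelled out in this diff, so adjust them to whatever the finished card specifies.

```python
# Minimal sketch: chat with StarChat Beta locally via transformers.
# Assumptions (not stated in the diff): the checkpoint id below and the
# <|system|>/<|user|>/<|assistant|> dialogue format.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="HuggingFaceH4/starchat-beta",  # assumed checkpoint id
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = (
    "<|system|>\n<|end|>\n"
    "<|user|>\nHow do I sort a list of dictionaries by a key in Python?<|end|>\n"
    "<|assistant|>"
)
outputs = pipe(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.2,  # low temperature keeps code answers focused
    top_k=50,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```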
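The training-data paragraph above cites the `wizardlm_clean.py` recipe used to strip alignment-style responses from the ShareGPT data. As a rough sketch of what such a keyword filter looks like when applied to `openassistant-guanaco` (the `text` column name and the phrase list are illustrative assumptions here, not the actual recipe):

```python
# Rough sketch of a WizardLM-style "uncensoring" filter: drop examples
# whose dialogue contains boilerplate alignment phrases. The phrase list
# is illustrative only; the real recipe in wizardlm_clean.py is longer.
from datasets import load_dataset

ALIGNMENT_PHRASES = [
    "as an ai language model",
    "i cannot provide",
    "against my programming",
]

def is_clean(example):
    text = example["text"].lower()  # assumed column name for the dialogue
    return not any(phrase in text for phrase in ALIGNMENT_PHRASES)

raw = load_dataset("timdettmers/openassistant-guanaco")
filtered = raw.filter(is_clean)
for split in raw:
    print(split, len(raw[split]), "->", len(filtered[split]))
```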
```diff
@@ -63,4 +62,4 @@ The following hyperparameters were used during training:
 - Transformers 4.28.1
 - Pytorch 2.0.1+cu118
 - Datasets 2.12.0
-- Tokenizers 0.13.3
+- Tokenizers 0.13.3
```