## Model Description

`Stable Beluga 2` is a Llama2 70B model fine-tuned on an Orca-style dataset.

## Usage

Start chatting with `Stable Beluga 2` using the following code snippet:

```python
import torch
```

The prompt format is as follows:

```
### System:
This is a system prompt, please behave and help the user

### User:
Your prompt here

### Assistant
The output of Stable Beluga 2
```
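The usage snippet above is cut off after the import in this revision. As a minimal sketch, the prompt format can be assembled with a small helper before being handed to the model; the helper name `build_prompt`, the example message, and the commented-out generation settings are illustrative assumptions, not part of the model card:

```python
def build_prompt(user_message: str,
                 system_prompt: str = "This is a system prompt, please behave and help the user") -> str:
    """Assemble the ### System / ### User / ### Assistant prompt format."""
    return (
        f"### System:\n{system_prompt}\n\n"
        f"### User:\n{user_message}\n\n"
        f"### Assistant\n"
    )

# Example (the message text is illustrative):
prompt = build_prompt("Write me a poem please")

# Hypothetical generation flow, sketched for shape only (loading the 70B
# checkpoint needs substantial GPU memory; names follow the standard
# Transformers API, the sampling values are assumptions):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained("stabilityai/StableBeluga2")
# model = AutoModelForCausalLM.from_pretrained(
#     "stabilityai/StableBeluga2", torch_dtype=torch.float16, device_map="auto")
# inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# output = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.95)
# print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Keeping the system prompt in the string rather than a chat template mirrors the plain-text format shown above.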
## Model Details

* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: Stable Beluga 2 is an auto-regressive language model fine-tuned on Llama2 70B.
* **Language(s)**: English
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **License**: Fine-tuned checkpoints (`Stable Beluga 2`) are licensed under the [STABLE BELUGA NON-COMMERCIAL COMMUNITY LICENSE AGREEMENT](https://huggingface.co/stabilityai/StableBeluga2/LICENSE.TXT)
* **Contact**: For questions and comments about the model, please email `[email protected]`

### Training Dataset

`Stable Beluga 2` is trained on our internal Orca-style dataset.

### Training Procedure