---
license: cc-by-nc-sa-4.0
language:
- en
datasets:
- garage-bAInd/Open-Platypus
---

GGUF v2 quantizations of [RobbeD/OpenLlama-Platypus-3B](https://huggingface.co/RobbeD/OpenLlama-Platypus-3B).
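The sketch below shows one way to fetch one of these quantized files from the Hub and load it with `llama-cpp-python`. The repository id and the `.gguf` filename are placeholders, not the actual names used here.

```python
# Hedged sketch: repo_id and filename are placeholders; substitute the actual
# repository id and .gguf file listed on this page.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="<your-namespace>/OpenLlama-Platypus-3B-GGUF",  # placeholder repo id
    filename="openllama-platypus-3b.q4_k_m.gguf",           # placeholder quantization file
)

# OpenLLaMA-3B was trained with a 2048-token context window.
llm = Llama(model_path=gguf_path, n_ctx=2048)
```
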
# OpenLlama-Platypus-3B

OpenLlama-Platypus-3B is an instruction-fine-tuned model based on the OpenLLaMA-3B transformer architecture.

### Model Details

* **Trained by:** Robbe De Sutter
* **Model type:** **OpenLlama-Platypus-3B** is an auto-regressive language model based on the OpenLLaMA-3B transformer architecture.
* **Language(s):** English
* **License for base weights:** Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))

### Prompt Template

```
### Instruction:

<prompt> (without the <>)

### Response:
```
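To make the template concrete, here is a minimal helper that fills it in for a single instruction; the `build_prompt` name and the example instruction are illustrative only, not part of the released code.

```python
# Minimal sketch: format an instruction with the prompt template above.
# `build_prompt` is a hypothetical helper name used here for illustration.
def build_prompt(instruction: str) -> str:
    return f"### Instruction:\n\n{instruction}\n\n### Response:\n"

print(build_prompt("Name three prime numbers greater than 100."))
```

The resulting string can be passed directly to a loaded GGUF model, e.g. `llm(build_prompt("..."), max_tokens=256)` with the `Llama` object from the earlier sketch.
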
### Training Dataset

`RobbeD/OpenLlama-Platypus-3B` was trained on the STEM- and logic-based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).

Please see their [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information.
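For a quick look at the training data, the sketch below loads Open-Platypus with the `datasets` library; the column names used here (`instruction`, `output`) are assumed from the dataset card, and the printed row is arbitrary.

```python
# Hedged sketch: inspect the Open-Platypus fine-tuning data.
# Column names (instruction / output) are assumed from the dataset card.
from datasets import load_dataset

ds = load_dataset("garage-bAInd/Open-Platypus", split="train")
print(ds)                    # row count and column names
print(ds[0]["instruction"])  # one training instruction
print(ds[0]["output"])       # its reference response
```
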
### Training Procedure

`RobbeD/OpenLlama-Platypus-3B` was instruction-fine-tuned using LoRA on a single AMD Radeon RX 6900 XT (16 GB).
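As an illustration only, a LoRA setup of this kind might look like the hedged sketch below; the base checkpoint, target modules, and every hyperparameter are assumptions for the example, not the configuration actually used to train this model.

```python
# Hedged sketch of a LoRA fine-tuning setup; all names and hyperparameters are
# illustrative assumptions, not the values used for OpenLlama-Platypus-3B.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "openlm-research/open_llama_3b"          # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base, use_fast=False)  # OpenLLaMA recommends the slow tokenizer
model = AutoModelForCausalLM.from_pretrained(base)

lora_config = LoraConfig(
    r=16,                                       # assumed adapter rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],        # common choice for LLaMA-style attention
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()              # LoRA updates only a small fraction of weights
```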