---
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- ehartford/wizard_vicuna_70k_unfiltered
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPT4-LLM-Cleaned
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- tasksource/mmlu
- openai/summarize_from_feedback
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

# Manticore 13B - Preview Release (previously Wizard Mega)

Manticore 13B is a Llama 13B model fine-tuned on the following datasets:

- [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) - based on a cleaned and de-duplicated subset
- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
- [Wizard-Vicuna](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered)
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
- [GPT4-LLM-Cleaned](https://huggingface.co/datasets/teknium/GPT4-LLM-Cleaned)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses
- [mmlu](https://huggingface.co/datasets/tasksource/mmlu) - instruct augmented for detailed responses; the subset includes:
  - abstract_algebra
  - conceptual_physics
  - formal_logic
  - high_school_physics
  - logical_fallacies
- [hellaswag](https://huggingface.co/datasets/hellaswag) - 5K-row subset, instruct augmented for concise responses
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization

# Demo

Try out the model in HF Spaces. The demo uses a quantized GGML version of the model so it can return predictions quickly on smaller GPUs (and even CPUs). The GGML quantization may cause a minimal loss of model quality.

- https://huggingface.co/spaces/openaccess-ai-collective/manticore-ggml
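
If you want to run the full-precision checkpoint locally rather than through the Space, a minimal `transformers` sketch might look like the one below (the prompt and generation settings are illustrative placeholders; loading a 13B model in fp16 needs roughly 26 GB of GPU memory, or CPU offload via `device_map="auto"`):

```python
# Minimal sketch: load the full-precision model with transformers
# (not the GGML build used by the Space). Assumes transformers, torch,
# and accelerate are installed; generation settings are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openaccess-ai-collective/manticore-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 roughly halves memory vs fp32
    device_map="auto",          # spread layers across available GPUs/CPU
)

prompt = "### Instruction: Explain memoization in one short paragraph.\n\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```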

## Release Notes

- https://wandb.ai/wing-lian/manticore-13b/runs/nq3u3uoh/workspace

## Build

Manticore was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 8x A100 80GB GPUs.

- Preview Release: 1 epoch taking 8 hours.
- The configuration to duplicate this build is provided in this repo's [/configs folder](https://huggingface.co/openaccess-ai-collective/manticore-13b/tree/main/configs).

## Bias, Risks, and Limitations

Manticore has not been aligned to human preferences with techniques like RLHF, nor deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).

Manticore was fine-tuned from the base model LLaMA 13B; please refer to that model card's Limitations section for relevant information.

## Examples

```
### Instruction: write Python code that returns the first n numbers of the Fibonacci sequence using memoization.

### Assistant:
```
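
For reference, a memoized Fibonacci function along the lines this prompt asks for could look like the sketch below (an illustrative hand-written answer, not captured model output):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(i: int) -> int:
    """Return the i-th Fibonacci number, caching each result so it is computed only once."""
    return i if i < 2 else fib(i - 1) + fib(i - 2)

def fibonacci(n: int) -> list:
    """Return the first n Fibonacci numbers using the memoized helper."""
    return [fib(i) for i in range(n)]

print(fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```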

```
### Instruction: Finish the joke, a mechanic and a car salesman walk into a bar...

### Assistant:
```