GPTQ

Quantized with GPTQ at a sequence length of 2048, using the VMware/open-instruct dataset for calibration.
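
A minimal loading sketch, assuming a recent transformers with a GPTQ backend (auto-gptq or gptqmodel) installed; device placement is an illustrative choice, not a requirement:

```python
# Minimal sketch: load the GPTQ checkpoint through transformers, which picks up
# the quantization config from the repo. Assumes a GPTQ backend is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ewof/koishi-8x7b-qlora-gptq"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # spread the quantized weights across available devices
)
```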

Training

Training was done with axolotl on a cluster of 4x NVIDIA A100 GPUs.

The A100 GPU cluster was graciously provided by lloorree.

Trained on koishi commit 6e675d1 for one epoch.

Base Model

Rank-16 QLoRA tune of mistralai/Mixtral-8x7B-v0.1 (all modules targeted; adapter merged into the base weights).
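
A sketch of an equivalent adapter configuration in peft (peft >= 0.8 for the "all-linear" shorthand); the actual run used axolotl, and every hyperparameter here other than the rank is an illustrative assumption:

```python
# Illustrative sketch of a rank-16 QLoRA config targeting all linear modules,
# approximating the axolotl setup described above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                         # rank stated in this card
    lora_alpha=32,                # assumption; not stated in the card
    lora_dropout=0.05,            # assumption
    target_modules="all-linear",  # "all modules" per the card
    task_type="CAUSAL_LM",
)

# After training, the adapter was merged into the base weights; with peft that is:
# merged = PeftModel.from_pretrained(base_model, adapter_path).merge_and_unload()
```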

Prompting

The current model version has been trained on prompts using three roles, denoted by the following tokens: <|system|>, <|user|>, and <|model|>.

The <|system|> token can be used to inject out-of-channel information behind the scenes, while the <|user|> token indicates user input. The <|model|> token then signals that the model should generate a response. These tokens may appear multiple times and be chained together to form a conversation history.
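
A minimal sketch of assembling such a prompt from the tokens above (whether whitespace or newlines belong around the tokens is an assumption; none are added here):

```python
# Minimal sketch: build a multi-turn prompt from the role tokens described above.
def build_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """turns is a list of (user_message, model_reply) pairs; leave the last
    reply empty to end the prompt at <|model|> for generation."""
    prompt = f"<|system|>{system}"
    for user_msg, model_reply in turns:
        prompt += f"<|user|>{user_msg}<|model|>{model_reply}"
    return prompt

prompt = build_prompt(
    "You are a helpful assistant.",
    [("What is QLoRA?", "")],  # empty reply: the model continues from <|model|>
)
# -> "<|system|>You are a helpful assistant.<|user|>What is QLoRA?<|model|>"
```

The resulting string can be tokenized and passed to the model loaded in the GPTQ section above.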
