Model Card

Model Description

Mistral 7B fine-tuned on the OpenHermes 2.5 dataset, optimised for multi-turn conversation and character impersonation.

The dataset was pre-processed as follows (a rough sketch of the filtering is shown after the list):

  1. remove all refusals
  2. remove any mention of an AI assistant
  3. split any multi-turn dialogue generated inside a single record into separate multi-turn conversation records
  4. add NSFW generated conversations from the Teatime dataset
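A minimal sketch of the first two filtering steps, assuming ShareGPT-style records with a "conversations" list of turns; the field names and the refusal/AI-mention phrases are illustrative assumptions, not the exact pipeline used to build this dataset.

```python
# Illustrative sketch only: the record layout ("conversations" with "value" fields)
# and the marker phrases are assumptions, not the exact filters used for this dataset.
REFUSAL_MARKERS = ("i'm sorry, but", "i cannot assist", "i must decline")
AI_MENTIONS = ("as an ai", "ai assistant", "language model")

def keep_record(record: dict) -> bool:
    """Return True if no turn contains a refusal or a mention of an AI assistant."""
    for turn in record["conversations"]:
        text = turn["value"].lower()
        if any(marker in text for marker in REFUSAL_MARKERS + AI_MENTIONS):
            return False
    return True

raw_records = [
    {"conversations": [{"from": "human", "value": "Write a story."},
                       {"from": "gpt", "value": "Once upon a time..."}]},
    {"conversations": [{"from": "human", "value": "Who are you?"},
                       {"from": "gpt", "value": "As an AI assistant, I..."}]},
]

# Steps 3-4 (splitting generated dialogues into separate multi-turn records and
# merging the Teatime conversations) are dataset-specific and not shown here.
cleaned = [r for r in raw_records if keep_record(r)]
print(len(cleaned))  # -> 1: the record mentioning an AI assistant is dropped
```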
  • Developed by: l3utterfly
  • Funded by: Layla Network
  • Model type: Mistral
  • Language(s) (NLP): English
  • License: Apache-2.0
  • Finetuned from model: Mistral 7B

Uses

Base model used by Layla - the offline personal assistant: https://www.layla-network.ai

Help & support: https://discord.gg/x546YJ6nYC

Prompt template:

USER:
ASSISTANT:
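A minimal inference sketch with Hugging Face transformers using the USER:/ASSISTANT: template above; the exact spacing around the template and the sampling settings are illustrative assumptions, not values recommended by the authors.

```python
# Minimal sketch, assuming the plain USER:/ASSISTANT: template above; sampling
# parameters are illustrative defaults, not tuned recommendations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "l3utterfly/mistral-7b-v0.1-layla-v4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "USER: Introduce yourself as a pirate captain.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens after the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```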

Built with Axolotl

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 64.69 |
| AI2 Reasoning Challenge (25-shot) | 62.29 |
| HellaSwag (10-shot)               | 83.36 |
| MMLU (5-shot)                     | 64.32 |
| TruthfulQA (0-shot)               | 43.14 |
| Winogrande (5-shot)               | 79.56 |
| GSM8K (5-shot)                    | 55.50 |