---
language:
  - en
license: apache-2.0
tags:
  - Mixtral
  - instruct
  - finetune
  - chatml
  - DPO
  - RLHF
  - gpt4
  - synthetic data
  - distillation
  - mlx
base_model: mistralai/Mixtral-8x7B-v0.1
model-index:
  - name: Nous-Hermes-2-Mixtral-8x7B-DPO
    results: []
datasets:
  - euclaise/reddit-instruct-curated
  - teknium/OpenHermes-2.5
---

# mlx-community/NousHermes-Mixtral-8x7B-Reddit-mlx

This model was converted to MLX format from mlx-community/Nous-Hermes-2-Mixtral-8x7B-DPO-4bit and fine-tuned on a dataset of 7k selected Reddit threads. Refer to the original model card for more details on the model.

## Use with mlx

When prompting the model, use the following format:

```
Question: [your question]

Assistant:
```

Install the MLX-LM package:

```shell
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/NousHermes-Mixtral-8x7B-Reddit-mlx")
response = generate(model, tokenizer, prompt="Question: ELI5 quantum mechanics.\n\nAssistant:", verbose=True)
```
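
For convenience, a small helper can wrap a user question in the Question/Assistant template before calling `generate`. This is a minimal sketch, not part of the original card: the `ask` helper and the `max_tokens` value are illustrative assumptions, and exact keyword arguments may differ across `mlx-lm` versions.

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/NousHermes-Mixtral-8x7B-Reddit-mlx")

def ask(question: str, max_tokens: int = 256) -> str:
    # Illustrative helper (not from the original card): build the prompt in the
    # Question/Assistant format described above, then generate a completion.
    prompt = f"Question: {question}\n\nAssistant:"
    return generate(model, tokenizer, prompt=prompt, max_tokens=max_tokens, verbose=False)

print(ask("ELI5 quantum mechanics."))
```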