---
language:
  - en
license: apache-2.0
tags:
  - Mixtral
  - instruct
  - finetune
  - chatml
  - DPO
  - RLHF
  - gpt4
  - synthetic data
  - distillation
  - mlx
base_model: mistralai/Mixtral-8x7B-v0.1
model-index:
  - name: Nous-Hermes-2-Mixtral-8x7B-DPO
    results: []
---


# mlx-community/NousHermes-Mixtral-8x7B-Reddit-mlx

This model was converted to MLX format from mlx-community/Nous-Hermes-2-Mixtral-8x7B-DPO-4bit and fine-tuned on a dataset of 7k selected Reddit threads. Refer to the original model card for more details on the model.

For the fine-tuning data, see the original dataset.

## Use with mlx

When prompting the model, use the following format:

```
Question: [your question]

Assistant:
```
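As a convenience, the format above can be built with a small helper. This is a sketch; the `format_prompt` name is my own, not part of the model card:

```python
def format_prompt(question: str) -> str:
    # Wrap a question in the model card's "Question: ... Assistant:" template,
    # with a blank line between the two turns as shown above.
    return f"Question: {question}\n\nAssistant:"

prompt = format_prompt("ELI5 quantum mechanics.")
# prompt == "Question: ELI5 quantum mechanics.\n\nAssistant:"
```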

```shell
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/NousHermes-Mixtral-8x7B-Reddit-mlx")
prompt = "Question: ELI5 quantum mechanics.\n\nAssistant:"
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```