Meta-Llama-3.1-8B-Instruct-Apple-MLX

Overview

This model is a merge of an MLX QLoRA adapter with the base Meta-Llama-3.1-8B-Instruct model, trained to answer questions and provide guidance on Apple's machine learning framework, MLX. Fine-tuning was done with LoRA (Low-Rank Adaptation) on a custom dataset of question-answer pairs derived from the MLX documentation.
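
For reference, an adapter merge like this can be reproduced with peft. The sketch below is a minimal illustration, assuming a hypothetical local adapter path; it is not necessarily the exact procedure used for this model.

import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model in bfloat16.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct",
    torch_dtype=torch.bfloat16,
)

# Attach the trained LoRA adapter (the path here is hypothetical).
model = PeftModel.from_pretrained(base, "./mlx-qlora-adapter")

# Fold the adapter weights into the base weights and save the merged model.
merged = model.merge_and_unload()
merged.save_pretrained("Meta-Llama-3.1-8B-Instruct-Apple-MLX")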

Dataset

The model was fine-tuned for a single epoch on the Apple MLX QA dataset.
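
To illustrate how such question-answer pairs map onto the model's chat format, here is a minimal sketch using the tokenizer's chat template; the record and its field names are hypothetical, not actual dataset entries.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

# Hypothetical QA record; field names are assumptions for illustration.
pair = {"question": "How do I install MLX?", "answer": "Run pip install mlx."}

# Render the pair as a Llama 3.1 chat transcript for fine-tuning.
text = tokenizer.apply_chat_template(
    [
        {"role": "user", "content": pair["question"]},
        {"role": "assistant", "content": pair["answer"]},
    ],
    tokenize=False,
)
print(text)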

Installation

To use the model, you need to install the required dependencies:

pip install torch transformers accelerate peft jinja2==3.1.0

Usage

Here’s a sample code snippet to load and interact with the model:

import transformers
import torch

# Load the fine-tuned model from the Hub.
model_id = "koyeb/Meta-Llama-3.1-8B-Instruct-Apple-MLX"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",  # requires accelerate
)

# Chat-style input: a system prompt plus a user question about MLX.
messages = [
    {"role": "system", "content": "You are a helpful assistant for Apple's MLX machine learning framework."},
    {"role": "user", "content": "How do I create an array in MLX?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)

# The pipeline returns the full conversation; the last message is the assistant's reply.
print(outputs[0]["generated_text"][-1])
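
To keep a conversation going, you can append to the returned message list and call the pipeline again. A minimal sketch:

# generated_text holds the messages so far, including the assistant's reply.
messages = outputs[0]["generated_text"]
messages.append({"role": "user", "content": "Can you show that as a code example?"})

outputs = pipeline(messages, max_new_tokens=256)
print(outputs[0]["generated_text"][-1]["content"])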