---
license: apache-2.0
language:
- en
base_model: louisbrulenaudet/Pearl-7B-0211-ties
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
library_name: transformers
tags:
- mlx
- merge
- mergekit
- louisbrulenaudet/Pearl-7B-slerp
- WizardLM/WizardMath-7B-V1.1
- cognitivecomputations/WestLake-7B-v2-laser
- CultriX/NeuralTrix-7B-dpo
- chemistry
- biology
- math
pipeline_tag: text-generation
model-index:
- name: Pearl-7B-0211-ties
  results:
  - task:
      type: text-generation
    metrics:
    - name: Average
      type: Average
      value: 75.11
    - name: ARC
      type: ARC
      value: 71.42
    - name: GSM8K
      type: GSM8K
      value: 70.66
    - name: Winogrande
      type: Winogrande
      value: 84.37
    - name: TruthfulQA
      type: TruthfulQA
      value: 71.46
    - name: HellaSwag
      type: HellaSwag
      value: 88.86
    source:
      name: Open LLM Leaderboard
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
---
# mlx-community/Pearl-7B
This model was converted to MLX format from [`louisbrulenaudet/Pearl-7B-0211-ties`](https://huggingface.co/louisbrulenaudet/Pearl-7B-0211-ties) using mlx-lm version 0.15.2.
Refer to the [original model card](https://huggingface.co/louisbrulenaudet/Pearl-7B-0211-ties) for more details on the model.
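The conversion can be reproduced with mlx-lm's `convert` entry point. A minimal sketch, assuming mlx-lm is installed; the output directory name `Pearl-7B` is illustrative, not part of the original card:

```bash
# Download the source weights and write an MLX-format copy to ./Pearl-7B
python -m mlx_lm.convert \
    --hf-path louisbrulenaudet/Pearl-7B-0211-ties \
    --mlx-path Pearl-7B
```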
## Use with mlx
```bash
pip install -U mlx-lm
python -m mlx_lm.generate --model mlx-community/Pearl-7B --max-tokens 100 --temp 0.0
```
```python
from mlx_lm import load, generate

# Load the converted weights and tokenizer from the Hugging Face Hub
model, tokenizer = load("mlx-community/Pearl-7B")

# verbose=True streams the generated tokens to stdout
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
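The upstream model is a merge of instruction-tuned models, so chat-style prompts generally work better when run through the tokenizer's chat template. A minimal sketch, assuming the converted tokenizer ships one; the `messages` structure below is illustrative:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Pearl-7B")

prompt = "hello"

# Wrap the raw prompt in the chat template, if one is defined, so the model
# sees the same formatting it was fine-tuned on.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```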
## Citing & Authors
If you use this code in your research, please use the following BibTeX entry.
```bibtex
@misc{louisbrulenaudet2024,
  author = {Louis Brulé Naudet},
  title = {Pearl-7B-0211-ties, an xtraordinary 7B model},
  year = {2024},
  howpublished = {\url{https://huggingface.co/louisbrulenaudet/Pearl-7B-0211-ties}},
}
```
## Feedback
If you have any feedback, please reach out at [email protected].