---
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 0.2
    top_p: 0.95
widget:
- text: 'def print_hello_world():'
  example_title: Hello world
  group: Python
datasets:
- bigcode/the-stack-v2-train
license: bigcode-openrail-m
library_name: transformers
tags:
- code
- mlx
base_model: bigcode/starcoder2-15b
model-index:
- name: starcoder2-15b
  results:
  - task:
      type: text-generation
    dataset:
      name: CruxEval-I
      type: cruxeval-i
    metrics:
    - type: pass@1
      value: 48.1
  - task:
      type: text-generation
    dataset:
      name: DS-1000
      type: ds-1000
    metrics:
    - type: pass@1
      value: 33.8
  - task:
      type: text-generation
    dataset:
      name: GSM8K (PAL)
      type: gsm8k-pal
    metrics:
    - type: accuracy
      value: 65.1
  - task:
      type: text-generation
    dataset:
      name: HumanEval+
      type: humanevalplus
    metrics:
    - type: pass@1
      value: 37.8
  - task:
      type: text-generation
    dataset:
      name: HumanEval
      type: humaneval
    metrics:
    - type: pass@1
      value: 46.3
  - task:
      type: text-generation
    dataset:
      name: RepoBench-v1.1
      type: repobench-v1.1
    metrics:
    - type: edit-similarity
      value: 74.08
---
# mlx-community/bigcode-starcoder2-15b-6bit
The model [`mlx-community/bigcode-starcoder2-15b-6bit`](https://huggingface.co/mlx-community/bigcode-starcoder2-15b-6bit) was converted to MLX format from [`bigcode/starcoder2-15b`](https://huggingface.co/bigcode/starcoder2-15b) using mlx-lm version **0.21.1**.
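For reference, the 6-bit quantization itself can be reproduced with mlx-lm. A minimal sketch, assuming mlx-lm's `convert` helper and its `quantize`/`q_bits` keywords; check the signature in your installed version:

```python
from mlx_lm import convert

# Fetch the original weights, quantize to 6 bits, and write an MLX
# checkpoint to ./starcoder2-15b-6bit. Keyword names are assumptions
# based on mlx-lm's convert API; verify against your installed version.
convert(
    "bigcode/starcoder2-15b",
    mlx_path="starcoder2-15b-6bit",
    quantize=True,
    q_bits=6,
)
```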
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download the 6-bit quantized model and its tokenizer.
model, tokenizer = load("mlx-community/bigcode-starcoder2-15b-6bit")

prompt = "hello"

# StarCoder2 is a base model and typically ships without a chat
# template; this branch only runs for tokenizers that define one.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
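Since StarCoder2 is a completion model rather than a chat model, a code prefix such as the widget example above is a more representative prompt. A minimal sketch reusing the same API (`max_tokens` is an assumed keyword forwarded by mlx-lm's `generate`):

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/bigcode-starcoder2-15b-6bit")

# Completion-style prompt, taken from the widget example above.
completion = generate(
    model,
    tokenizer,
    prompt="def print_hello_world():",
    max_tokens=64,  # assumed keyword; check mlx_lm.generate's signature
    verbose=True,
)
```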