# Neo_7b-merge13

Neo_7b-merge13 is a merge of the following model with itself using LazyMergekit:
* [DewEfresh/neo_7b](https://huggingface.co/DewEfresh/neo_7b)
## 🧩 Configuration
```yaml
# Define the slices for the model merging process
slices:
  - sources:
      # First part: merge layer 0 with layer 3
      - model: DewEfresh/neo_7b
        layer_range: [0, 0]
      - model: DewEfresh/neo_7b
        layer_range: [3, 3]
  - sources:
      # Second part: merge layer 1 with layer 3
      - model: DewEfresh/neo_7b
        layer_range: [1, 1]
      - model: DewEfresh/neo_7b
        layer_range: [3, 3]
  - sources:
      # Third part: merge layer 2 with layer 3
      - model: DewEfresh/neo_7b
        layer_range: [2, 2]
      - model: DewEfresh/neo_7b
        layer_range: [3, 3]
  - sources:
      # Fourth part: layers 4 to 27 from the original model
      - model: DewEfresh/neo_7b
        layer_range: [4, 27]
# Specify the merging method for the slices
merge_method: slerp
base_model: DewEfresh/neo_7b
parameters:
  t: 0.3333
dtype: bfloat16
```
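The `slerp` method spherically interpolates the paired layers, with `t: 0.3333` weighting each early layer roughly one third toward layer 3. To reproduce the merge locally, the configuration above can be fed to the `mergekit` package that LazyMergekit wraps. The sketch below is illustrative and assumes the YAML is saved as `config.yaml` and `mergekit` is installed; the output path is a placeholder.

```python
# Minimal sketch: run the merge by shelling out to mergekit's CLI.
# Assumes `pip install mergekit` and that the YAML above is saved as config.yaml.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",        # mergekit's command-line entry point
        "config.yaml",          # the merge configuration shown above
        "./Neo_7b-merge13",     # output directory for the merged weights (illustrative path)
        "--copy-tokenizer",     # copy the base model's tokenizer into the output
    ],
    check=True,
)
```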
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "DewEfresh/Neo_7b-merge13"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the chat-formatted prompt from the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
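If you prefer not to use the `pipeline` helper, the same generation can be done with the lower-level `transformers` API. This is a minimal sketch using standard `AutoModelForCausalLM` calls with the same sampling settings as the example above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "DewEfresh/Neo_7b-merge13"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "What is a large language model?"}]
# Tokenize the chat-formatted prompt and move it to the model's device
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sample a completion with the same settings as the pipeline example
output_ids = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```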