
Original model: weblab-10b-instruction-sft

This is the 4-bit GPTQ version.

The file size is smaller and execution is faster, but inference quality may be slightly worse.

Benchmarking is in progress; results will be uploaded at a later date.
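
As a rough sanity check of the memory saving on your own hardware, you can inspect how much GPU memory the model occupies after loading it with the sample code below. This is a minimal sketch using a standard PyTorch utility, not part of the original instructions:

import torch

# Run after loading the model (see the sample code below);
# reports GPU memory currently allocated to PyTorch tensors
print(f"GPU memory allocated: {torch.cuda.memory_allocated() / 1024**3:.2f} GiB")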

Sample code

Install AutoGPTQ first:

pip install auto-gptq

Then run:

from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

quantized_model_dir = "dahara1/weblab-10b-instruction-sft-GPTQ"
model_basename = "gptq_model-4bit-128g"

# Load the tokenizer from the quantized model repository
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir)

# Load the 4-bit GPTQ weights (safetensors format) onto the first GPU
model = AutoGPTQForCausalLM.from_quantized(
        quantized_model_dir,
        model_basename=model_basename,
        use_safetensors=True,
        device="cuda:0")

# "Please tell me five Studio Ghibli works"
prompt = "スタジオジブリの作品を5つ教えてください"
prompt_template = f"### Instruction: {prompt}\n### Response:"

# Tokenize the prompt, then sample up to 100 new tokens
tokens = tokenizer(prompt_template, return_tensors="pt").to("cuda:0").input_ids
output = model.generate(input_ids=tokens, max_new_tokens=100, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0]))
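
The decoded output echoes the prompt template. A minimal sketch for keeping only the model's answer, assuming the "### Response:" marker from the template above:

decoded = tokenizer.decode(output[0], skip_special_tokens=True)
# Keep only the text after the response marker
response = decoded.split("### Response:")[-1].strip()
print(response)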

See Also

https://github.com/PanQiWei/AutoGPTQ/blob/main/docs/tutorial/01-Quick-Start.md