---
library_name: transformers
license: apache-2.0
base_model:
- ibm-granite/granite-3b-code-instruct-128k
---
4-bit (NF4) bitsandbytes quantization of https://huggingface.co/ibm-granite/granite-3b-code-instruct-128k.
See https://huggingface.co/blog/4bit-transformers-bitsandbytes for background on 4-bit quantization with bitsandbytes. The checkpoint was produced with the following script:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Define the 4-bit NF4 quantization configuration
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4 weight format
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in bfloat16
)

# Load the pre-trained model, quantizing the weights to 4-bit on load
model = AutoModelForCausalLM.from_pretrained(
    "ibm-granite/granite-3b-code-instruct-128k",
    quantization_config=nf4_config,
    device_map="auto",  # place the quantized weights on the available GPU(s)
)

# Load the tokenizer associated with the model
tokenizer = AutoTokenizer.from_pretrained("ibm-granite/granite-3b-code-instruct-128k")

# Push the quantized model and tokenizer to the Hugging Face Hub
# (token=True reads the token saved by `huggingface-cli login`)
model.push_to_hub("onekq-ai/granite-3b-code-instruct-128k-bnb-4bit", token=True)
tokenizer.push_to_hub("onekq-ai/granite-3b-code-instruct-128k-bnb-4bit", token=True)
```
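
To use the quantized checkpoint, load it directly from this repo; the 4-bit configuration is stored in its `config.json`, so no `BitsAndBytesConfig` is needed at load time. A minimal inference sketch, assuming a CUDA-capable GPU is available (the prompt is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the already-quantized model; bitsandbytes dequantizes on the fly during inference
model = AutoModelForCausalLM.from_pretrained(
    "onekq-ai/granite-3b-code-instruct-128k-bnb-4bit",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("onekq-ai/granite-3b-code-instruct-128k-bnb-4bit")

# Granite instruct models ship a chat template; apply it before generating
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```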