---
license: apache-2.0
language:
- en
---
# **Introduction**
We introduce luxia-21.4b-alignment-v1.0, an instruction-tuned and aligned model based on luxia-21.4b.
Please refer to the evaluation results table for details.
# **Instruction Fine-tuning Strategy**
We utilize state-of-the-art instruction fine-tuning methods, including supervised fine-tuning (SFT) and direct preference optimization (DPO).
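For reference, DPO trains the policy to prefer chosen responses over rejected ones relative to a frozen reference model. The snippet below is a minimal, illustrative sketch of the DPO loss, assuming per-sequence log-probabilities have already been computed; it is not the exact training code used for this model.
```python
# Illustrative DPO loss sketch (not this model's actual training code).
# Assumes summed per-sequence log-probabilities of chosen/rejected responses
# under the policy being trained and a frozen reference model.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Implicit rewards: log-prob margins of the policy over the reference model
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # DPO objective: -log sigmoid of the reward margin (chosen vs. rejected)
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```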
# **Data Contamination Test Results**
Results will be updated soon.
# **Evaluation Results**
Results will be updated soon.
# **Usage Instructions**
### **How to use**
```python
# pip install transformers==4.35.2
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and the model in half precision,
# sharding weights across available devices automatically.
tokenizer = AutoTokenizer.from_pretrained("saltlux/luxia-21.4b-alignment-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "saltlux/luxia-21.4b-alignment-v1.0",
    device_map="auto",
    torch_dtype=torch.float16,
)
```
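Once loaded, text can be generated with the standard `generate` API. The prompt below is only a placeholder example, not an official chat template for this model.
```python
# Example generation (placeholder prompt; not an official chat template for this model)
prompt = "### User:\nWhat is direct preference optimization?\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```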
### **License**
- [saltlux/luxia-21.4b-alignment-v1.0](https://huggingface.co/saltlux/luxia-21.4b-alignment-v1.0): apache-2.0
### **Contact Us**
Any questions and suggestions are welcome in the discussion tab.